Test Report: KVM_Linux_crio 20327

42aa66410b215ebc171a5bcfa49a23d455b53987:2025-01-27:38094

Test fail (11/316)

TestAddons/parallel/Ingress (152.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-293977 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-293977 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-293977 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [08949e9b-1809-4b30-b1c1-81a95fc4e265] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [08949e9b-1809-4b30-b1c1-81a95fc4e265] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003808298s
I0127 13:06:21.902879  562636 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-293977 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.527380389s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-293977 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.12
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-293977 -n addons-293977
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-293977 logs -n 25: (1.404305064s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-540094                                                                     | download-only-540094 | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC | 27 Jan 25 13:03 UTC |
	| delete  | -p download-only-343942                                                                     | download-only-343942 | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC | 27 Jan 25 13:03 UTC |
	| delete  | -p download-only-540094                                                                     | download-only-540094 | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC | 27 Jan 25 13:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-945232 | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC |                     |
	|         | binary-mirror-945232                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34777                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-945232                                                                     | binary-mirror-945232 | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC | 27 Jan 25 13:03 UTC |
	| addons  | enable dashboard -p                                                                         | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC |                     |
	|         | addons-293977                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC |                     |
	|         | addons-293977                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-293977 --wait=true                                                                | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC | 27 Jan 25 13:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-293977 addons disable                                                                | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:05 UTC | 27 Jan 25 13:05 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-293977 addons disable                                                                | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:05 UTC | 27 Jan 25 13:05 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:05 UTC | 27 Jan 25 13:05 UTC |
	|         | -p addons-293977                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-293977 addons                                                                        | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:05 UTC | 27 Jan 25 13:05 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-293977 addons                                                                        | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:06 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-293977 addons disable                                                                | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:06 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-293977 ip                                                                            | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:06 UTC |
	| addons  | addons-293977 addons disable                                                                | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:06 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-293977 addons                                                                        | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:06 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-293977 addons disable                                                                | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:06 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-293977 ssh curl -s                                                                   | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-293977 ssh cat                                                                       | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:06 UTC |
	|         | /opt/local-path-provisioner/pvc-cadd48f3-e676-45ff-bd54-b2a580221202_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-293977 addons disable                                                                | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-293977 addons                                                                        | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:06 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-293977 addons                                                                        | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:06 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-293977 addons                                                                        | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-293977 ip                                                                            | addons-293977        | jenkins | v1.35.0 | 27 Jan 25 13:08 UTC | 27 Jan 25 13:08 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:03:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:03:23.951173  563271 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:03:23.951290  563271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:03:23.951299  563271 out.go:358] Setting ErrFile to fd 2...
	I0127 13:03:23.951303  563271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:03:23.951475  563271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 13:03:23.952490  563271 out.go:352] Setting JSON to false
	I0127 13:03:23.953701  563271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13549,"bootTime":1737969455,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:03:23.953759  563271 start.go:139] virtualization: kvm guest
	I0127 13:03:23.955531  563271 out.go:177] * [addons-293977] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:03:23.956782  563271 notify.go:220] Checking for updates...
	I0127 13:03:23.956801  563271 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 13:03:23.957954  563271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:03:23.959070  563271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 13:03:23.960273  563271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 13:03:23.961371  563271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:03:23.962419  563271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:03:23.963619  563271 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:03:23.994847  563271 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 13:03:23.995949  563271 start.go:297] selected driver: kvm2
	I0127 13:03:23.995963  563271 start.go:901] validating driver "kvm2" against <nil>
	I0127 13:03:23.995975  563271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:03:23.996605  563271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:03:23.996708  563271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:03:24.011147  563271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:03:24.011191  563271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 13:03:24.011411  563271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:03:24.011442  563271 cni.go:84] Creating CNI manager for ""
	I0127 13:03:24.011490  563271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:03:24.011498  563271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 13:03:24.011535  563271 start.go:340] cluster config:
	{Name:addons-293977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-293977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:03:24.011641  563271 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:03:24.013811  563271 out.go:177] * Starting "addons-293977" primary control-plane node in "addons-293977" cluster
	I0127 13:03:24.014846  563271 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:03:24.014869  563271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 13:03:24.014879  563271 cache.go:56] Caching tarball of preloaded images
	I0127 13:03:24.014972  563271 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:03:24.014985  563271 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:03:24.015336  563271 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/config.json ...
	I0127 13:03:24.015360  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/config.json: {Name:mkcaa2e86fa99ec0eca6e636fa2d0c6eef9664bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:24.015486  563271 start.go:360] acquireMachinesLock for addons-293977: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:03:24.015530  563271 start.go:364] duration metric: took 31.527µs to acquireMachinesLock for "addons-293977"
	I0127 13:03:24.015546  563271 start.go:93] Provisioning new machine with config: &{Name:addons-293977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-293977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:03:24.015590  563271 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 13:03:24.016923  563271 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0127 13:03:24.017031  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:03:24.017071  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:03:24.030054  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45669
	I0127 13:03:24.030500  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:03:24.031103  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:03:24.031127  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:03:24.031446  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:03:24.031622  563271 main.go:141] libmachine: (addons-293977) Calling .GetMachineName
	I0127 13:03:24.031752  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:03:24.031898  563271 start.go:159] libmachine.API.Create for "addons-293977" (driver="kvm2")
	I0127 13:03:24.031929  563271 client.go:168] LocalClient.Create starting
	I0127 13:03:24.031965  563271 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem
	I0127 13:03:24.223246  563271 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem
	I0127 13:03:24.301549  563271 main.go:141] libmachine: Running pre-create checks...
	I0127 13:03:24.301565  563271 main.go:141] libmachine: (addons-293977) Calling .PreCreateCheck
	I0127 13:03:24.301950  563271 main.go:141] libmachine: (addons-293977) Calling .GetConfigRaw
	I0127 13:03:24.302315  563271 main.go:141] libmachine: Creating machine...
	I0127 13:03:24.302327  563271 main.go:141] libmachine: (addons-293977) Calling .Create
	I0127 13:03:24.302467  563271 main.go:141] libmachine: (addons-293977) creating KVM machine...
	I0127 13:03:24.302484  563271 main.go:141] libmachine: (addons-293977) creating network...
	I0127 13:03:24.303608  563271 main.go:141] libmachine: (addons-293977) DBG | found existing default KVM network
	I0127 13:03:24.304279  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:24.304141  563293 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002011f0}
	I0127 13:03:24.304313  563271 main.go:141] libmachine: (addons-293977) DBG | created network xml: 
	I0127 13:03:24.304333  563271 main.go:141] libmachine: (addons-293977) DBG | <network>
	I0127 13:03:24.304344  563271 main.go:141] libmachine: (addons-293977) DBG |   <name>mk-addons-293977</name>
	I0127 13:03:24.304352  563271 main.go:141] libmachine: (addons-293977) DBG |   <dns enable='no'/>
	I0127 13:03:24.304362  563271 main.go:141] libmachine: (addons-293977) DBG |   
	I0127 13:03:24.304371  563271 main.go:141] libmachine: (addons-293977) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 13:03:24.304388  563271 main.go:141] libmachine: (addons-293977) DBG |     <dhcp>
	I0127 13:03:24.304506  563271 main.go:141] libmachine: (addons-293977) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 13:03:24.304523  563271 main.go:141] libmachine: (addons-293977) DBG |     </dhcp>
	I0127 13:03:24.304530  563271 main.go:141] libmachine: (addons-293977) DBG |   </ip>
	I0127 13:03:24.304536  563271 main.go:141] libmachine: (addons-293977) DBG |   
	I0127 13:03:24.304542  563271 main.go:141] libmachine: (addons-293977) DBG | </network>
	I0127 13:03:24.304551  563271 main.go:141] libmachine: (addons-293977) DBG | 
	I0127 13:03:24.309080  563271 main.go:141] libmachine: (addons-293977) DBG | trying to create private KVM network mk-addons-293977 192.168.39.0/24...
	I0127 13:03:24.373284  563271 main.go:141] libmachine: (addons-293977) DBG | private KVM network mk-addons-293977 192.168.39.0/24 created
	I0127 13:03:24.373367  563271 main.go:141] libmachine: (addons-293977) setting up store path in /home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977 ...
	I0127 13:03:24.373386  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:24.373258  563293 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 13:03:24.373407  563271 main.go:141] libmachine: (addons-293977) building disk image from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 13:03:24.373426  563271 main.go:141] libmachine: (addons-293977) Downloading /home/jenkins/minikube-integration/20327-555419/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 13:03:24.647515  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:24.647336  563293 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa...
	I0127 13:03:24.716629  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:24.716501  563293 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/addons-293977.rawdisk...
	I0127 13:03:24.716667  563271 main.go:141] libmachine: (addons-293977) DBG | Writing magic tar header
	I0127 13:03:24.716683  563271 main.go:141] libmachine: (addons-293977) DBG | Writing SSH key tar header
	I0127 13:03:24.716704  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:24.716654  563293 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977 ...
	I0127 13:03:24.716798  563271 main.go:141] libmachine: (addons-293977) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977
	I0127 13:03:24.716829  563271 main.go:141] libmachine: (addons-293977) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977 (perms=drwx------)
	I0127 13:03:24.716837  563271 main.go:141] libmachine: (addons-293977) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines
	I0127 13:03:24.716848  563271 main.go:141] libmachine: (addons-293977) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 13:03:24.716854  563271 main.go:141] libmachine: (addons-293977) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419
	I0127 13:03:24.716863  563271 main.go:141] libmachine: (addons-293977) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 13:03:24.716868  563271 main.go:141] libmachine: (addons-293977) DBG | checking permissions on dir: /home/jenkins
	I0127 13:03:24.716875  563271 main.go:141] libmachine: (addons-293977) DBG | checking permissions on dir: /home
	I0127 13:03:24.716880  563271 main.go:141] libmachine: (addons-293977) DBG | skipping /home - not owner
	I0127 13:03:24.716887  563271 main.go:141] libmachine: (addons-293977) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines (perms=drwxr-xr-x)
	I0127 13:03:24.716896  563271 main.go:141] libmachine: (addons-293977) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube (perms=drwxr-xr-x)
	I0127 13:03:24.716905  563271 main.go:141] libmachine: (addons-293977) setting executable bit set on /home/jenkins/minikube-integration/20327-555419 (perms=drwxrwxr-x)
	I0127 13:03:24.716944  563271 main.go:141] libmachine: (addons-293977) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 13:03:24.716970  563271 main.go:141] libmachine: (addons-293977) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 13:03:24.716979  563271 main.go:141] libmachine: (addons-293977) creating domain...
	I0127 13:03:24.717995  563271 main.go:141] libmachine: (addons-293977) define libvirt domain using xml: 
	I0127 13:03:24.718011  563271 main.go:141] libmachine: (addons-293977) <domain type='kvm'>
	I0127 13:03:24.718020  563271 main.go:141] libmachine: (addons-293977)   <name>addons-293977</name>
	I0127 13:03:24.718027  563271 main.go:141] libmachine: (addons-293977)   <memory unit='MiB'>4000</memory>
	I0127 13:03:24.718036  563271 main.go:141] libmachine: (addons-293977)   <vcpu>2</vcpu>
	I0127 13:03:24.718043  563271 main.go:141] libmachine: (addons-293977)   <features>
	I0127 13:03:24.718049  563271 main.go:141] libmachine: (addons-293977)     <acpi/>
	I0127 13:03:24.718057  563271 main.go:141] libmachine: (addons-293977)     <apic/>
	I0127 13:03:24.718072  563271 main.go:141] libmachine: (addons-293977)     <pae/>
	I0127 13:03:24.718088  563271 main.go:141] libmachine: (addons-293977)     
	I0127 13:03:24.718098  563271 main.go:141] libmachine: (addons-293977)   </features>
	I0127 13:03:24.718108  563271 main.go:141] libmachine: (addons-293977)   <cpu mode='host-passthrough'>
	I0127 13:03:24.718126  563271 main.go:141] libmachine: (addons-293977)   
	I0127 13:03:24.718135  563271 main.go:141] libmachine: (addons-293977)   </cpu>
	I0127 13:03:24.718142  563271 main.go:141] libmachine: (addons-293977)   <os>
	I0127 13:03:24.718149  563271 main.go:141] libmachine: (addons-293977)     <type>hvm</type>
	I0127 13:03:24.718158  563271 main.go:141] libmachine: (addons-293977)     <boot dev='cdrom'/>
	I0127 13:03:24.718164  563271 main.go:141] libmachine: (addons-293977)     <boot dev='hd'/>
	I0127 13:03:24.718173  563271 main.go:141] libmachine: (addons-293977)     <bootmenu enable='no'/>
	I0127 13:03:24.718183  563271 main.go:141] libmachine: (addons-293977)   </os>
	I0127 13:03:24.718193  563271 main.go:141] libmachine: (addons-293977)   <devices>
	I0127 13:03:24.718201  563271 main.go:141] libmachine: (addons-293977)     <disk type='file' device='cdrom'>
	I0127 13:03:24.718216  563271 main.go:141] libmachine: (addons-293977)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/boot2docker.iso'/>
	I0127 13:03:24.718226  563271 main.go:141] libmachine: (addons-293977)       <target dev='hdc' bus='scsi'/>
	I0127 13:03:24.718233  563271 main.go:141] libmachine: (addons-293977)       <readonly/>
	I0127 13:03:24.718250  563271 main.go:141] libmachine: (addons-293977)     </disk>
	I0127 13:03:24.718260  563271 main.go:141] libmachine: (addons-293977)     <disk type='file' device='disk'>
	I0127 13:03:24.718270  563271 main.go:141] libmachine: (addons-293977)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 13:03:24.718283  563271 main.go:141] libmachine: (addons-293977)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/addons-293977.rawdisk'/>
	I0127 13:03:24.718295  563271 main.go:141] libmachine: (addons-293977)       <target dev='hda' bus='virtio'/>
	I0127 13:03:24.718311  563271 main.go:141] libmachine: (addons-293977)     </disk>
	I0127 13:03:24.718323  563271 main.go:141] libmachine: (addons-293977)     <interface type='network'>
	I0127 13:03:24.718334  563271 main.go:141] libmachine: (addons-293977)       <source network='mk-addons-293977'/>
	I0127 13:03:24.718342  563271 main.go:141] libmachine: (addons-293977)       <model type='virtio'/>
	I0127 13:03:24.718352  563271 main.go:141] libmachine: (addons-293977)     </interface>
	I0127 13:03:24.718361  563271 main.go:141] libmachine: (addons-293977)     <interface type='network'>
	I0127 13:03:24.718371  563271 main.go:141] libmachine: (addons-293977)       <source network='default'/>
	I0127 13:03:24.718392  563271 main.go:141] libmachine: (addons-293977)       <model type='virtio'/>
	I0127 13:03:24.718406  563271 main.go:141] libmachine: (addons-293977)     </interface>
	I0127 13:03:24.718432  563271 main.go:141] libmachine: (addons-293977)     <serial type='pty'>
	I0127 13:03:24.718451  563271 main.go:141] libmachine: (addons-293977)       <target port='0'/>
	I0127 13:03:24.718458  563271 main.go:141] libmachine: (addons-293977)     </serial>
	I0127 13:03:24.718467  563271 main.go:141] libmachine: (addons-293977)     <console type='pty'>
	I0127 13:03:24.718483  563271 main.go:141] libmachine: (addons-293977)       <target type='serial' port='0'/>
	I0127 13:03:24.718490  563271 main.go:141] libmachine: (addons-293977)     </console>
	I0127 13:03:24.718496  563271 main.go:141] libmachine: (addons-293977)     <rng model='virtio'>
	I0127 13:03:24.718504  563271 main.go:141] libmachine: (addons-293977)       <backend model='random'>/dev/random</backend>
	I0127 13:03:24.718509  563271 main.go:141] libmachine: (addons-293977)     </rng>
	I0127 13:03:24.718516  563271 main.go:141] libmachine: (addons-293977)     
	I0127 13:03:24.718521  563271 main.go:141] libmachine: (addons-293977)     
	I0127 13:03:24.718525  563271 main.go:141] libmachine: (addons-293977)   </devices>
	I0127 13:03:24.718530  563271 main.go:141] libmachine: (addons-293977) </domain>
	I0127 13:03:24.718536  563271 main.go:141] libmachine: (addons-293977) 
	I0127 13:03:24.722592  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:d6:15:b4 in network default
	I0127 13:03:24.723191  563271 main.go:141] libmachine: (addons-293977) starting domain...
	I0127 13:03:24.723210  563271 main.go:141] libmachine: (addons-293977) ensuring networks are active...
	I0127 13:03:24.723219  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:24.723846  563271 main.go:141] libmachine: (addons-293977) Ensuring network default is active
	I0127 13:03:24.724164  563271 main.go:141] libmachine: (addons-293977) Ensuring network mk-addons-293977 is active
	I0127 13:03:24.724601  563271 main.go:141] libmachine: (addons-293977) getting domain XML...
	I0127 13:03:24.725297  563271 main.go:141] libmachine: (addons-293977) creating domain...
	I0127 13:03:25.043744  563271 main.go:141] libmachine: (addons-293977) waiting for IP...
	I0127 13:03:25.044559  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:25.044891  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:25.044966  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:25.044902  563293 retry.go:31] will retry after 276.870611ms: waiting for domain to come up
	I0127 13:03:25.323325  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:25.323784  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:25.323814  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:25.323743  563293 retry.go:31] will retry after 240.259897ms: waiting for domain to come up
	I0127 13:03:25.565236  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:25.565712  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:25.565766  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:25.565698  563293 retry.go:31] will retry after 472.536336ms: waiting for domain to come up
	I0127 13:03:26.039263  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:26.039659  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:26.039713  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:26.039655  563293 retry.go:31] will retry after 433.361393ms: waiting for domain to come up
	I0127 13:03:26.474203  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:26.474685  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:26.474708  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:26.474630  563293 retry.go:31] will retry after 682.342918ms: waiting for domain to come up
	I0127 13:03:27.158702  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:27.159198  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:27.159234  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:27.159140  563293 retry.go:31] will retry after 910.358711ms: waiting for domain to come up
	I0127 13:03:28.071231  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:28.071640  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:28.071674  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:28.071604  563293 retry.go:31] will retry after 1.025399778s: waiting for domain to come up
	I0127 13:03:29.098324  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:29.098773  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:29.098807  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:29.098718  563293 retry.go:31] will retry after 1.020245097s: waiting for domain to come up
	I0127 13:03:30.121018  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:30.121494  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:30.121541  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:30.121472  563293 retry.go:31] will retry after 1.700288324s: waiting for domain to come up
	I0127 13:03:31.823765  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:31.824161  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:31.824208  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:31.824122  563293 retry.go:31] will retry after 1.985184142s: waiting for domain to come up
	I0127 13:03:33.812285  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:33.812797  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:33.812825  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:33.812765  563293 retry.go:31] will retry after 1.923860527s: waiting for domain to come up
	I0127 13:03:35.737902  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:35.738367  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:35.738431  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:35.738343  563293 retry.go:31] will retry after 2.386739332s: waiting for domain to come up
	I0127 13:03:38.126414  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:38.126862  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:38.126886  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:38.126848  563293 retry.go:31] will retry after 3.504150836s: waiting for domain to come up
	I0127 13:03:41.633883  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:41.634261  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find current IP address of domain addons-293977 in network mk-addons-293977
	I0127 13:03:41.634294  563271 main.go:141] libmachine: (addons-293977) DBG | I0127 13:03:41.634239  563293 retry.go:31] will retry after 3.44581015s: waiting for domain to come up
	I0127 13:03:45.081223  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:45.081733  563271 main.go:141] libmachine: (addons-293977) found domain IP: 192.168.39.12
	I0127 13:03:45.081763  563271 main.go:141] libmachine: (addons-293977) reserving static IP address...
	I0127 13:03:45.081777  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has current primary IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:45.082166  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find host DHCP lease matching {name: "addons-293977", mac: "52:54:00:78:66:86", ip: "192.168.39.12"} in network mk-addons-293977
	I0127 13:03:45.155170  563271 main.go:141] libmachine: (addons-293977) reserved static IP address 192.168.39.12 for domain addons-293977
	I0127 13:03:45.155209  563271 main.go:141] libmachine: (addons-293977) DBG | Getting to WaitForSSH function...
	I0127 13:03:45.155218  563271 main.go:141] libmachine: (addons-293977) waiting for SSH...
	I0127 13:03:45.157797  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:45.158123  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977
	I0127 13:03:45.158155  563271 main.go:141] libmachine: (addons-293977) DBG | unable to find defined IP address of network mk-addons-293977 interface with MAC address 52:54:00:78:66:86
	I0127 13:03:45.158294  563271 main.go:141] libmachine: (addons-293977) DBG | Using SSH client type: external
	I0127 13:03:45.158354  563271 main.go:141] libmachine: (addons-293977) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa (-rw-------)
	I0127 13:03:45.158412  563271 main.go:141] libmachine: (addons-293977) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:03:45.158428  563271 main.go:141] libmachine: (addons-293977) DBG | About to run SSH command:
	I0127 13:03:45.158448  563271 main.go:141] libmachine: (addons-293977) DBG | exit 0
	I0127 13:03:45.162262  563271 main.go:141] libmachine: (addons-293977) DBG | SSH cmd err, output: exit status 255: 
	I0127 13:03:45.162284  563271 main.go:141] libmachine: (addons-293977) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0127 13:03:45.162292  563271 main.go:141] libmachine: (addons-293977) DBG | command : exit 0
	I0127 13:03:45.162297  563271 main.go:141] libmachine: (addons-293977) DBG | err     : exit status 255
	I0127 13:03:45.162303  563271 main.go:141] libmachine: (addons-293977) DBG | output  : 
	I0127 13:03:48.162504  563271 main.go:141] libmachine: (addons-293977) DBG | Getting to WaitForSSH function...
	I0127 13:03:48.164920  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.165244  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:48.165276  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.165390  563271 main.go:141] libmachine: (addons-293977) DBG | Using SSH client type: external
	I0127 13:03:48.165425  563271 main.go:141] libmachine: (addons-293977) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa (-rw-------)
	I0127 13:03:48.165466  563271 main.go:141] libmachine: (addons-293977) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:03:48.165490  563271 main.go:141] libmachine: (addons-293977) DBG | About to run SSH command:
	I0127 13:03:48.165505  563271 main.go:141] libmachine: (addons-293977) DBG | exit 0
	I0127 13:03:48.293077  563271 main.go:141] libmachine: (addons-293977) DBG | SSH cmd err, output: <nil>: 
	I0127 13:03:48.293327  563271 main.go:141] libmachine: (addons-293977) KVM machine creation complete
	I0127 13:03:48.293565  563271 main.go:141] libmachine: (addons-293977) Calling .GetConfigRaw
	I0127 13:03:48.294286  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:03:48.294498  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:03:48.294694  563271 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 13:03:48.294713  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:03:48.296089  563271 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 13:03:48.296121  563271 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 13:03:48.296129  563271 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 13:03:48.296146  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:03:48.299723  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.300156  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:48.300185  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.300327  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:03:48.300523  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:48.300678  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:48.300856  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:03:48.301037  563271 main.go:141] libmachine: Using SSH client type: native
	I0127 13:03:48.301223  563271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0127 13:03:48.301234  563271 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 13:03:48.416351  563271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:03:48.416375  563271 main.go:141] libmachine: Detecting the provisioner...
	I0127 13:03:48.416382  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:03:48.418639  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.418971  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:48.418990  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.419112  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:03:48.419286  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:48.419454  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:48.419601  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:03:48.419751  563271 main.go:141] libmachine: Using SSH client type: native
	I0127 13:03:48.419907  563271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0127 13:03:48.419917  563271 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 13:03:48.529705  563271 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 13:03:48.529806  563271 main.go:141] libmachine: found compatible host: buildroot
	I0127 13:03:48.529826  563271 main.go:141] libmachine: Provisioning with buildroot...
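
Provisioner detection above boils down to reading /etc/os-release on the guest and matching its ID field ("buildroot" here) against the provisioners libmachine knows about. A small Go sketch of that parsing step, reading the file locally for simplicity rather than over SSH:

// Sketch: extract the ID field from an os-release style file.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func detectProvisioner(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", fmt.Errorf("no ID field in %s", path)
}

func main() {
	id, err := detectProvisioner("/etc/os-release")
	fmt.Println(id, err) // prints "buildroot" on the guest shown in this log
}
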
	I0127 13:03:48.529836  563271 main.go:141] libmachine: (addons-293977) Calling .GetMachineName
	I0127 13:03:48.530006  563271 buildroot.go:166] provisioning hostname "addons-293977"
	I0127 13:03:48.530028  563271 main.go:141] libmachine: (addons-293977) Calling .GetMachineName
	I0127 13:03:48.530206  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:03:48.532515  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.532865  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:48.532895  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.532970  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:03:48.533137  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:48.533288  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:48.533424  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:03:48.533563  563271 main.go:141] libmachine: Using SSH client type: native
	I0127 13:03:48.533747  563271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0127 13:03:48.533759  563271 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-293977 && echo "addons-293977" | sudo tee /etc/hostname
	I0127 13:03:48.658748  563271 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-293977
	
	I0127 13:03:48.658773  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:03:48.661405  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.661761  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:48.661787  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.661960  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:03:48.662146  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:48.662323  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:48.662447  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:03:48.662577  563271 main.go:141] libmachine: Using SSH client type: native
	I0127 13:03:48.662745  563271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0127 13:03:48.662761  563271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-293977' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-293977/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-293977' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:03:48.781679  563271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:03:48.781720  563271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 13:03:48.781773  563271 buildroot.go:174] setting up certificates
	I0127 13:03:48.781788  563271 provision.go:84] configureAuth start
	I0127 13:03:48.781804  563271 main.go:141] libmachine: (addons-293977) Calling .GetMachineName
	I0127 13:03:48.782054  563271 main.go:141] libmachine: (addons-293977) Calling .GetIP
	I0127 13:03:48.784576  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.784934  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:48.784965  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.785094  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:03:48.787315  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.787592  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:48.787619  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.787702  563271 provision.go:143] copyHostCerts
	I0127 13:03:48.787787  563271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 13:03:48.787991  563271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 13:03:48.788091  563271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 13:03:48.788184  563271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.addons-293977 san=[127.0.0.1 192.168.39.12 addons-293977 localhost minikube]
	I0127 13:03:48.850835  563271 provision.go:177] copyRemoteCerts
	I0127 13:03:48.850887  563271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:03:48.850912  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:03:48.852770  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.853088  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:48.853119  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:48.853222  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:03:48.853388  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:48.853547  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:03:48.853710  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:03:48.938949  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:03:48.961886  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 13:03:48.984494  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 13:03:49.007200  563271 provision.go:87] duration metric: took 225.396617ms to configureAuth
	I0127 13:03:49.007225  563271 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:03:49.007416  563271 config.go:182] Loaded profile config "addons-293977": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:03:49.007502  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:03:49.010260  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.010598  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:49.010627  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.010772  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:03:49.010952  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:49.011142  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:49.011255  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:03:49.011404  563271 main.go:141] libmachine: Using SSH client type: native
	I0127 13:03:49.011580  563271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0127 13:03:49.011595  563271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:03:49.232846  563271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:03:49.232871  563271 main.go:141] libmachine: Checking connection to Docker...
	I0127 13:03:49.232879  563271 main.go:141] libmachine: (addons-293977) Calling .GetURL
	I0127 13:03:49.234116  563271 main.go:141] libmachine: (addons-293977) DBG | using libvirt version 6000000
	I0127 13:03:49.235901  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.236260  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:49.236289  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.236511  563271 main.go:141] libmachine: Docker is up and running!
	I0127 13:03:49.236525  563271 main.go:141] libmachine: Reticulating splines...
	I0127 13:03:49.236533  563271 client.go:171] duration metric: took 25.204593737s to LocalClient.Create
	I0127 13:03:49.236558  563271 start.go:167] duration metric: took 25.204660753s to libmachine.API.Create "addons-293977"
	I0127 13:03:49.236568  563271 start.go:293] postStartSetup for "addons-293977" (driver="kvm2")
	I0127 13:03:49.236577  563271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:03:49.236605  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:03:49.236820  563271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:03:49.236848  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:03:49.238798  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.239115  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:49.239142  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.239248  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:03:49.239405  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:49.239564  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:03:49.239669  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:03:49.327021  563271 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:03:49.331131  563271 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:03:49.331157  563271 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 13:03:49.331237  563271 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 13:03:49.331272  563271 start.go:296] duration metric: took 94.698821ms for postStartSetup
	I0127 13:03:49.331315  563271 main.go:141] libmachine: (addons-293977) Calling .GetConfigRaw
	I0127 13:03:49.331854  563271 main.go:141] libmachine: (addons-293977) Calling .GetIP
	I0127 13:03:49.334342  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.334649  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:49.334677  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.334873  563271 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/config.json ...
	I0127 13:03:49.335034  563271 start.go:128] duration metric: took 25.319434493s to createHost
	I0127 13:03:49.335056  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:03:49.337199  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.337500  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:49.337528  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.337662  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:03:49.337830  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:49.337967  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:49.338082  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:03:49.338260  563271 main.go:141] libmachine: Using SSH client type: native
	I0127 13:03:49.338435  563271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0127 13:03:49.338445  563271 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:03:49.450652  563271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737983029.428410134
	
	I0127 13:03:49.450679  563271 fix.go:216] guest clock: 1737983029.428410134
	I0127 13:03:49.450688  563271 fix.go:229] Guest: 2025-01-27 13:03:49.428410134 +0000 UTC Remote: 2025-01-27 13:03:49.335045813 +0000 UTC m=+25.419624024 (delta=93.364321ms)
	I0127 13:03:49.450712  563271 fix.go:200] guest clock delta is within tolerance: 93.364321ms
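
The clock check above runs "date +%s.%N" on the guest, parses the output as the guest clock, and compares it with the host clock; the ~93ms delta is accepted because it falls inside the tolerance window. A Go sketch of that comparison; the 2-second tolerance is an assumed threshold, not necessarily the value minikube uses:

// Sketch: parse an epoch.nanoseconds string and check clock drift.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func parseEpoch(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly nine digits (nanoseconds).
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1737983029.428410134") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Now().Sub(guest)
	tolerance := 2 * time.Second // assumed threshold
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
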
	I0127 13:03:49.450721  563271 start.go:83] releasing machines lock for "addons-293977", held for 25.435181632s
	I0127 13:03:49.450747  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:03:49.450974  563271 main.go:141] libmachine: (addons-293977) Calling .GetIP
	I0127 13:03:49.453309  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.453635  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:49.453661  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.453830  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:03:49.454265  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:03:49.454470  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:03:49.454578  563271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:03:49.454620  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:03:49.454670  563271 ssh_runner.go:195] Run: cat /version.json
	I0127 13:03:49.454702  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:03:49.457325  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.457510  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.457693  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:49.457724  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.457843  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:03:49.458012  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:49.458021  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:49.458057  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:49.458179  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:03:49.458253  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:03:49.458368  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:03:49.458418  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:03:49.458553  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:03:49.458692  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:03:49.537679  563271 ssh_runner.go:195] Run: systemctl --version
	I0127 13:03:49.560839  563271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:03:49.713210  563271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:03:49.720129  563271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:03:49.720201  563271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:03:49.735280  563271 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
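
The find/mv step above sidelines any pre-existing bridge or podman CNI configs by renaming them with a ".mk_disabled" suffix so they cannot conflict with the CNI config written later. A local Go sketch of the same idea (the real step runs remotely under sudo):

// Sketch: rename bridge/podman CNI configs out of the way.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	moved, err := disableConflictingCNI("/etc/cni/net.d")
	fmt.Println(moved, err)
}
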
	I0127 13:03:49.735297  563271 start.go:495] detecting cgroup driver to use...
	I0127 13:03:49.735347  563271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:03:49.750857  563271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:03:49.763301  563271 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:03:49.763338  563271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:03:49.775747  563271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:03:49.788127  563271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:03:49.903281  563271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:03:50.064557  563271 docker.go:233] disabling docker service ...
	I0127 13:03:50.064622  563271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:03:50.077466  563271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:03:50.089432  563271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:03:50.202374  563271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:03:50.315177  563271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:03:50.328699  563271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:03:50.346459  563271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 13:03:50.346526  563271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:03:50.356312  563271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:03:50.356373  563271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:03:50.365971  563271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:03:50.375672  563271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:03:50.385290  563271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:03:50.395150  563271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:03:50.404741  563271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:03:50.420969  563271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:03:50.430516  563271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:03:50.439292  563271 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:03:50.439331  563271 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:03:50.451138  563271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
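
The sequence above is a fallback: the bridge-nf-call-iptables sysctl key is missing, so the br_netfilter module is loaded (which normally creates it), and IPv4 forwarding is switched on. A Go sketch of that logic; it needs root and mirrors the commands in the log rather than any particular minikube function:

// Sketch: ensure br_netfilter is loaded and IPv4 forwarding is enabled.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		// Key not present yet: loading br_netfilter usually creates it.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v (%s)", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	fmt.Println(ensureBridgeNetfilter())
}
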
	I0127 13:03:50.459979  563271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:03:50.573731  563271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:03:50.660239  563271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:03:50.660321  563271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:03:50.664999  563271 start.go:563] Will wait 60s for crictl version
	I0127 13:03:50.665063  563271 ssh_runner.go:195] Run: which crictl
	I0127 13:03:50.669129  563271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:03:50.713093  563271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:03:50.713203  563271 ssh_runner.go:195] Run: crio --version
	I0127 13:03:50.741375  563271 ssh_runner.go:195] Run: crio --version
	I0127 13:03:50.769546  563271 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 13:03:50.770799  563271 main.go:141] libmachine: (addons-293977) Calling .GetIP
	I0127 13:03:50.773663  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:50.773991  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:03:50.774013  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:03:50.774299  563271 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 13:03:50.778066  563271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:03:50.790017  563271 kubeadm.go:883] updating cluster {Name:addons-293977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-293977 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:03:50.790128  563271 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:03:50.790171  563271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:03:50.820276  563271 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 13:03:50.820348  563271 ssh_runner.go:195] Run: which lz4
	I0127 13:03:50.823890  563271 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:03:50.827789  563271 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:03:50.827835  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 13:03:52.119012  563271 crio.go:462] duration metric: took 1.295144045s to copy over tarball
	I0127 13:03:52.119099  563271 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:03:54.271383  563271 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.15223836s)
	I0127 13:03:54.271426  563271 crio.go:469] duration metric: took 2.152381926s to extract the tarball
	I0127 13:03:54.271438  563271 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 13:03:54.310270  563271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:03:54.350773  563271 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:03:54.350807  563271 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:03:54.350818  563271 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.32.1 crio true true} ...
	I0127 13:03:54.350978  563271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-293977 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-293977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:03:54.351085  563271 ssh_runner.go:195] Run: crio config
	I0127 13:03:54.397995  563271 cni.go:84] Creating CNI manager for ""
	I0127 13:03:54.398022  563271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:03:54.398035  563271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:03:54.398064  563271 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-293977 NodeName:addons-293977 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:03:54.398235  563271 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-293977"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.12"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:03:54.398340  563271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:03:54.408409  563271 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:03:54.408477  563271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:03:54.417590  563271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 13:03:54.433817  563271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:03:54.449494  563271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0127 13:03:54.465354  563271 ssh_runner.go:195] Run: grep 192.168.39.12	control-plane.minikube.internal$ /etc/hosts
	I0127 13:03:54.469148  563271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
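
The bash one-liner above is an idempotent /etc/hosts update: filter out any existing line for the name, append the desired "IP<TAB>name" entry, and copy the result back into place. A Go sketch of the same update against an arbitrary file path (try it on a scratch file rather than the real /etc/hosts):

// Sketch: ensure a hosts file contains exactly one entry for a given name.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop blank lines and any stale entry for this name.
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Scratch file path for illustration.
	fmt.Println(ensureHostsEntry("hosts.test", "192.168.39.12", "control-plane.minikube.internal"))
}
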
	I0127 13:03:54.480701  563271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:03:54.593682  563271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:03:54.609714  563271 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977 for IP: 192.168.39.12
	I0127 13:03:54.609737  563271 certs.go:194] generating shared ca certs ...
	I0127 13:03:54.609760  563271 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:54.609945  563271 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 13:03:54.697574  563271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt ...
	I0127 13:03:54.697621  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt: {Name:mk79f6f7034efe8b10a47aaa37e821a44f9e87ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:54.697848  563271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key ...
	I0127 13:03:54.697868  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key: {Name:mk7cb0b40639f38508e8563f4973f292c26ac19a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:54.697989  563271 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 13:03:54.793830  563271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt ...
	I0127 13:03:54.793856  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt: {Name:mk86f4e9be203d31834c7a5e608d78ede1340c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:54.794051  563271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key ...
	I0127 13:03:54.794073  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key: {Name:mk773a4e591bfced232619345823dcb7d102f8f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:54.794200  563271 certs.go:256] generating profile certs ...
	I0127 13:03:54.794266  563271 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.key
	I0127 13:03:54.794286  563271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt with IP's: []
	I0127 13:03:55.182741  563271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt ...
	I0127 13:03:55.182775  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: {Name:mk902cfd57430777a16201892417018fa51ad507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:55.182996  563271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.key ...
	I0127 13:03:55.183014  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.key: {Name:mkb6eeb0220442387edc1eecb61339e9410873ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:55.183127  563271 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.key.48b04395
	I0127 13:03:55.183149  563271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.crt.48b04395 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12]
	I0127 13:03:55.600173  563271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.crt.48b04395 ...
	I0127 13:03:55.600215  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.crt.48b04395: {Name:mka7ded44e42f2839884863d9c46f5473b035241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:55.600405  563271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.key.48b04395 ...
	I0127 13:03:55.600425  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.key.48b04395: {Name:mk83c9ed68aed2b43402a771ae7509431c04d9b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:55.600530  563271 certs.go:381] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.crt.48b04395 -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.crt
	I0127 13:03:55.600640  563271 certs.go:385] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.key.48b04395 -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.key
	I0127 13:03:55.600712  563271 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/proxy-client.key
	I0127 13:03:55.600741  563271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/proxy-client.crt with IP's: []
	I0127 13:03:55.777727  563271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/proxy-client.crt ...
	I0127 13:03:55.777763  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/proxy-client.crt: {Name:mkc55288e89cbf33fed52ef4a3190dffbfac7421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:55.777924  563271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/proxy-client.key ...
	I0127 13:03:55.777941  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/proxy-client.key: {Name:mk9bab8403302b3ecedbcfb05b8eb6dacbe91de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:03:55.778137  563271 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 13:03:55.778188  563271 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:03:55.778224  563271 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:03:55.778260  563271 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
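
The certs phase above generates a "minikubeCA" certificate authority, a proxy-client CA, and profile certificates whose SANs include the IPs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.12). A generic crypto/x509 sketch of that shape, not minikube's own cert code, with error handling elided for brevity:

// Sketch: self-signed CA plus a server cert carrying IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs taken from the apiserver cert generation logged above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.12"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("CA: %d bytes, server cert: %d bytes\n", len(caDER), len(srvDER))
}
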
	I0127 13:03:55.778928  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:03:55.811710  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:03:55.835994  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:03:55.859115  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:03:55.882160  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 13:03:55.906087  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:03:55.929998  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:03:55.955089  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:03:55.991218  563271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:03:56.027560  563271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:03:56.043735  563271 ssh_runner.go:195] Run: openssl version
	I0127 13:03:56.049168  563271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:03:56.059690  563271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:03:56.063872  563271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:03:56.063920  563271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:03:56.069550  563271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:03:56.080443  563271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:03:56.084276  563271 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 13:03:56.084329  563271 kubeadm.go:392] StartCluster: {Name:addons-293977 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-293977 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:03:56.084418  563271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:03:56.084478  563271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:03:56.125888  563271 cri.go:89] found id: ""
	I0127 13:03:56.125969  563271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:03:56.136720  563271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:03:56.147508  563271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:03:56.156842  563271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:03:56.156864  563271 kubeadm.go:157] found existing configuration files:
	
	I0127 13:03:56.156904  563271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:03:56.165719  563271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:03:56.165786  563271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:03:56.174823  563271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:03:56.183903  563271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:03:56.183952  563271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:03:56.192925  563271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:03:56.201962  563271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:03:56.201999  563271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:03:56.213479  563271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:03:56.222359  563271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:03:56.222427  563271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:03:56.232596  563271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
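
	(Note: the init invocation above is a single very long log line; reformatted with shell line continuations, the command minikube runs inside the VM is:)

	    sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
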
	I0127 13:03:56.385991  563271 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:04:06.544916  563271 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:04:06.544985  563271 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:04:06.545060  563271 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:04:06.545207  563271 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:04:06.545322  563271 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:04:06.545413  563271 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:04:06.546896  563271 out.go:235]   - Generating certificates and keys ...
	I0127 13:04:06.546966  563271 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:04:06.547038  563271 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:04:06.547131  563271 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 13:04:06.547217  563271 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 13:04:06.547306  563271 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 13:04:06.547379  563271 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 13:04:06.547473  563271 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 13:04:06.547578  563271 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-293977 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0127 13:04:06.547623  563271 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 13:04:06.547726  563271 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-293977 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0127 13:04:06.547785  563271 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 13:04:06.547842  563271 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 13:04:06.547884  563271 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 13:04:06.547942  563271 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:04:06.548026  563271 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:04:06.548111  563271 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:04:06.548199  563271 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:04:06.548294  563271 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:04:06.548378  563271 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:04:06.548515  563271 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:04:06.548620  563271 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:04:06.550694  563271 out.go:235]   - Booting up control plane ...
	I0127 13:04:06.550817  563271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:04:06.550887  563271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:04:06.550944  563271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:04:06.551080  563271 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:04:06.551206  563271 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:04:06.551263  563271 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:04:06.551440  563271 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:04:06.551598  563271 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:04:06.551674  563271 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.019155ms
	I0127 13:04:06.551764  563271 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:04:06.551851  563271 kubeadm.go:310] [api-check] The API server is healthy after 5.502295852s
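
	(Note: the two health gates above poll fixed endpoints, which is also the quickest manual check when bootstrap hangs. Run from inside the node, e.g. via `minikube ssh`; the kubelet URL is the one printed in the log, while the API-server URL is an assumption based on the profile's 8443 secure port and needs -k because the serving certificate is cluster-signed.)

	    # kubelet health, same endpoint as the [kubelet-check] phase
	    curl -s http://127.0.0.1:10248/healthz

	    # API server health on the node's advertise address
	    curl -sk https://192.168.39.12:8443/healthz
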
	I0127 13:04:06.551950  563271 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:04:06.552119  563271 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:04:06.552183  563271 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:04:06.552370  563271 kubeadm.go:310] [mark-control-plane] Marking the node addons-293977 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:04:06.552421  563271 kubeadm.go:310] [bootstrap-token] Using token: 6d2iez.5cisid4ywsec3dxn
	I0127 13:04:06.553664  563271 out.go:235]   - Configuring RBAC rules ...
	I0127 13:04:06.553766  563271 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:04:06.553840  563271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:04:06.553980  563271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:04:06.554105  563271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:04:06.554205  563271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:04:06.554280  563271 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:04:06.554381  563271 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:04:06.554420  563271 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:04:06.554461  563271 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:04:06.554472  563271 kubeadm.go:310] 
	I0127 13:04:06.554527  563271 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:04:06.554534  563271 kubeadm.go:310] 
	I0127 13:04:06.554611  563271 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:04:06.554621  563271 kubeadm.go:310] 
	I0127 13:04:06.554644  563271 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:04:06.554694  563271 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:04:06.554744  563271 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:04:06.554752  563271 kubeadm.go:310] 
	I0127 13:04:06.554808  563271 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:04:06.554817  563271 kubeadm.go:310] 
	I0127 13:04:06.554859  563271 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:04:06.554869  563271 kubeadm.go:310] 
	I0127 13:04:06.554916  563271 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:04:06.554994  563271 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:04:06.555053  563271 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:04:06.555062  563271 kubeadm.go:310] 
	I0127 13:04:06.555135  563271 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:04:06.555210  563271 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:04:06.555216  563271 kubeadm.go:310] 
	I0127 13:04:06.555284  563271 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6d2iez.5cisid4ywsec3dxn \
	I0127 13:04:06.555374  563271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a60ff6161e02b5a75df4f173d820326404ac2037065d4322193a60c87e11fb02 \
	I0127 13:04:06.555394  563271 kubeadm.go:310] 	--control-plane 
	I0127 13:04:06.555400  563271 kubeadm.go:310] 
	I0127 13:04:06.555468  563271 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:04:06.555478  563271 kubeadm.go:310] 
	I0127 13:04:06.555550  563271 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6d2iez.5cisid4ywsec3dxn \
	I0127 13:04:06.555658  563271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a60ff6161e02b5a75df4f173d820326404ac2037065d4322193a60c87e11fb02 
	I0127 13:04:06.555669  563271 cni.go:84] Creating CNI manager for ""
	I0127 13:04:06.555676  563271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:04:06.557598  563271 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:04:06.558666  563271 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:04:06.570778  563271 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
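
	(Note: the 496-byte file pushed here is minikube's bridge CNI conflist; its contents are not included in the log, but the file can be inspected on the node after start-up, e.g. via `minikube ssh`:)

	    # Verify the bridge CNI config minikube wrote (path taken from the log line above)
	    sudo ls /etc/cni/net.d/
	    sudo cat /etc/cni/net.d/1-k8s.conflist
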
	I0127 13:04:06.592181  563271 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:04:06.592314  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:04:06.592335  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-293977 minikube.k8s.io/updated_at=2025_01_27T13_04_06_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=addons-293977 minikube.k8s.io/primary=true
	I0127 13:04:06.612851  563271 ops.go:34] apiserver oom_adj: -16
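
	(Note: reformatted for readability, the two kubectl invocations above are the cluster-admin binding for kube-system's default service account and the standard minikube node labels:)

	    sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	      --kubeconfig=/var/lib/minikube/kubeconfig

	    sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      label --overwrite nodes addons-293977 \
	      minikube.k8s.io/updated_at=2025_01_27T13_04_06_0700 \
	      minikube.k8s.io/version=v1.35.0 \
	      minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d \
	      minikube.k8s.io/name=addons-293977 \
	      minikube.k8s.io/primary=true
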
	I0127 13:04:06.773180  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:04:07.273704  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:04:07.773670  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:04:08.273885  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:04:08.774288  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:04:09.274257  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:04:09.774226  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:04:10.273816  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:04:10.773630  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:04:11.273523  563271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:04:11.369636  563271 kubeadm.go:1113] duration metric: took 4.777378884s to wait for elevateKubeSystemPrivileges
	I0127 13:04:11.369690  563271 kubeadm.go:394] duration metric: took 15.285369976s to StartCluster
	I0127 13:04:11.369717  563271 settings.go:142] acquiring lock: {Name:mk3584d1c70a231ddef63c926d3bba51690f47f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:04:11.369880  563271 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 13:04:11.370521  563271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:04:11.370779  563271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 13:04:11.370811  563271 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:04:11.370904  563271 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
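
	(Note: the toEnable map above is the per-addon flag set minikube resolved for this profile. The same addons can be listed or toggled after start-up with the standard addons subcommand, shown here for reference:)

	    # Show addon status for this profile
	    minikube addons list -p addons-293977

	    # Enable or disable individual addons on the profile
	    minikube addons enable ingress -p addons-293977
	    minikube addons disable volcano -p addons-293977
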
	I0127 13:04:11.371041  563271 addons.go:69] Setting yakd=true in profile "addons-293977"
	I0127 13:04:11.371058  563271 addons.go:69] Setting inspektor-gadget=true in profile "addons-293977"
	I0127 13:04:11.371078  563271 addons.go:238] Setting addon yakd=true in "addons-293977"
	I0127 13:04:11.371091  563271 addons.go:238] Setting addon inspektor-gadget=true in "addons-293977"
	I0127 13:04:11.371134  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.371145  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.371147  563271 addons.go:69] Setting storage-provisioner=true in profile "addons-293977"
	I0127 13:04:11.371200  563271 addons.go:69] Setting cloud-spanner=true in profile "addons-293977"
	I0127 13:04:11.371209  563271 addons.go:69] Setting gcp-auth=true in profile "addons-293977"
	I0127 13:04:11.371214  563271 addons.go:238] Setting addon storage-provisioner=true in "addons-293977"
	I0127 13:04:11.371221  563271 addons.go:238] Setting addon cloud-spanner=true in "addons-293977"
	I0127 13:04:11.371250  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.371255  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.371264  563271 mustload.go:65] Loading cluster: addons-293977
	I0127 13:04:11.371434  563271 config.go:182] Loaded profile config "addons-293977": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:04:11.371041  563271 config.go:182] Loaded profile config "addons-293977": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:04:11.371178  563271 addons.go:69] Setting metrics-server=true in profile "addons-293977"
	I0127 13:04:11.371637  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.371640  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.371654  563271 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-293977"
	I0127 13:04:11.371669  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.371673  563271 addons.go:69] Setting ingress-dns=true in profile "addons-293977"
	I0127 13:04:11.371680  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.371683  563271 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-293977"
	I0127 13:04:11.371689  563271 addons.go:238] Setting addon ingress-dns=true in "addons-293977"
	I0127 13:04:11.371696  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.371703  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.371726  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.371158  563271 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-293977"
	I0127 13:04:11.371762  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.371768  563271 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-293977"
	I0127 13:04:11.371648  563271 addons.go:238] Setting addon metrics-server=true in "addons-293977"
	I0127 13:04:11.371186  563271 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-293977"
	I0127 13:04:11.371785  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.371824  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.371840  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.371202  563271 addons.go:69] Setting ingress=true in profile "addons-293977"
	I0127 13:04:11.371869  563271 addons.go:238] Setting addon ingress=true in "addons-293977"
	I0127 13:04:11.371763  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.371907  563271 addons.go:69] Setting volcano=true in profile "addons-293977"
	I0127 13:04:11.371925  563271 addons.go:238] Setting addon volcano=true in "addons-293977"
	I0127 13:04:11.371196  563271 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-293977"
	I0127 13:04:11.371947  563271 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-293977"
	I0127 13:04:11.371172  563271 addons.go:69] Setting registry=true in profile "addons-293977"
	I0127 13:04:11.371960  563271 addons.go:238] Setting addon registry=true in "addons-293977"
	I0127 13:04:11.371994  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.372034  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.372048  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.372069  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.371193  563271 addons.go:69] Setting volumesnapshots=true in profile "addons-293977"
	I0127 13:04:11.371820  563271 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-293977"
	I0127 13:04:11.372136  563271 addons.go:238] Setting addon volumesnapshots=true in "addons-293977"
	I0127 13:04:11.372116  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.372163  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.371192  563271 addons.go:69] Setting default-storageclass=true in profile "addons-293977"
	I0127 13:04:11.372196  563271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-293977"
	I0127 13:04:11.372206  563271 out.go:177] * Verifying Kubernetes components...
	I0127 13:04:11.372263  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.372366  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.372414  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.372443  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.372499  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.372523  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.372582  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.372637  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.372659  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.372728  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.372763  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.372848  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.372947  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.372981  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.372239  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.373490  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.373521  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.374057  563271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:04:11.391683  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0127 13:04:11.391682  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35193
	I0127 13:04:11.391847  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39735
	I0127 13:04:11.391949  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0127 13:04:11.391957  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46569
	I0127 13:04:11.392175  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.392248  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.392353  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.392367  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.392707  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.392725  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.392793  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.392846  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.392871  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.392884  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.393237  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.393243  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.393263  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.393285  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.393299  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.393309  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.393325  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.393650  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.393728  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.393816  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.402074  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.402154  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.402314  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.402367  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.402556  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.402074  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.402630  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.402738  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.402782  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.402926  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.402959  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.403951  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.403992  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.409444  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.409843  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.409882  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.432996  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38519
	I0127 13:04:11.434012  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.434767  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.434790  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.435271  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.435933  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.435965  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.438602  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I0127 13:04:11.439135  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.439776  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.439794  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.440205  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.440460  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.440901  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37999
	I0127 13:04:11.441449  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.442048  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.442070  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.442495  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.442761  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45391
	I0127 13:04:11.442891  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0127 13:04:11.442946  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.443025  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41827
	I0127 13:04:11.443066  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37973
	I0127 13:04:11.443455  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.443599  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.443766  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36135
	I0127 13:04:11.444114  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.444133  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.444259  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.444270  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.444596  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.444682  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.445135  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.445138  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36095
	I0127 13:04:11.445176  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.445671  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.445711  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.445721  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34293
	I0127 13:04:11.446070  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.446153  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.446291  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.446314  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.446729  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.446777  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.447001  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.447196  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.447220  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.447321  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.447353  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.447377  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.447917  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.447974  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.448040  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.448359  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.448394  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.448997  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.449041  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.449408  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.449994  563271 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0127 13:04:11.450028  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.450066  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.450145  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42163
	I0127 13:04:11.450768  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.450804  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.451910  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.451912  563271 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0127 13:04:11.452791  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.453202  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.453826  563271 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0127 13:04:11.453873  563271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0127 13:04:11.454866  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.455504  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.455529  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.456150  563271 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0127 13:04:11.456205  563271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0127 13:04:11.456221  563271 out.go:177]   - Using image docker.io/registry:2.8.3
	I0127 13:04:11.457513  563271 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-293977"
	I0127 13:04:11.457556  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.457771  563271 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0127 13:04:11.457788  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0127 13:04:11.457806  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.457984  563271 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0127 13:04:11.457924  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.457996  563271 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0127 13:04:11.458021  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.458090  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.459439  563271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0127 13:04:11.459620  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.460177  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.460218  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.460589  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.461177  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.461195  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.461639  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.462312  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.462351  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.462410  563271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0127 13:04:11.463504  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.463959  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.464213  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.464112  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.464460  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.464612  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.464730  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
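
	(Note: the ssh client parameters logged here are enough to reach the node directly when debugging a failed run; an equivalent manual connection, or simply letting minikube resolve the details, looks like this:)

	    # Direct SSH using the machine key from the log line above
	    ssh -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa \
	        -o StrictHostKeyChecking=no docker@192.168.39.12

	    # Or let minikube handle the connection details
	    minikube ssh -p addons-293977
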
	I0127 13:04:11.465208  563271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0127 13:04:11.465366  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38423
	I0127 13:04:11.466148  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.466770  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.467440  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.467463  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.467658  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.467789  563271 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0127 13:04:11.467910  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.468096  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.468287  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.468931  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46245
	I0127 13:04:11.469215  563271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0127 13:04:11.469240  563271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0127 13:04:11.469261  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.470692  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.470713  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.471515  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.472391  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.472433  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.473285  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.473533  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.473954  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.473973  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.474217  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.474392  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.474455  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34295
	I0127 13:04:11.474739  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.474906  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.475472  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.475488  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.475944  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.476520  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.476557  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.477146  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.477865  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.477884  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.478238  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.478532  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.479991  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34495
	I0127 13:04:11.480449  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.480943  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.480970  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.481316  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.481704  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.483999  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.484475  563271 addons.go:238] Setting addon default-storageclass=true in "addons-293977"
	I0127 13:04:11.484521  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:11.484870  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.484911  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.485453  563271 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0127 13:04:11.486664  563271 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 13:04:11.486687  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0127 13:04:11.486707  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.489865  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.490239  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.490264  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.490503  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.490712  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.490906  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.491098  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.495863  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45113
	I0127 13:04:11.496540  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.497340  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.497368  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.497851  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.498088  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.498106  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0127 13:04:11.498719  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.498799  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37997
	I0127 13:04:11.499443  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.499469  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.499570  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34609
	I0127 13:04:11.499718  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.500101  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.500335  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.500586  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.501320  563271 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0127 13:04:11.501512  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.502111  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.502141  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.502205  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.502435  563271 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 13:04:11.502454  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0127 13:04:11.502472  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.502549  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.503108  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.503164  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.503790  563271 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0127 13:04:11.505082  563271 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0127 13:04:11.505101  563271 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0127 13:04:11.505131  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.505333  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42115
	I0127 13:04:11.505921  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.506132  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.506151  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.506412  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.506430  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.506761  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.507039  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.507108  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.507784  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46493
	I0127 13:04:11.508147  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35453
	I0127 13:04:11.508334  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.508183  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.508498  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.508569  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.508797  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.508897  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.509066  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.509174  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.509245  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.509323  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.509346  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.509566  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:11.509636  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:11.509711  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.510307  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.510448  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.510461  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.510508  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:11.510526  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:11.510533  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:11.510541  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:11.510547  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:11.510605  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.510969  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.511209  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.512101  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:11.512125  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	W0127 13:04:11.512219  563271 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0127 13:04:11.512700  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.513798  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.514921  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.515360  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43881
	I0127 13:04:11.515648  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0127 13:04:11.515920  563271 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0127 13:04:11.516325  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.516348  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.516531  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.516779  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.517035  563271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0127 13:04:11.517055  563271 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0127 13:04:11.517078  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.517203  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.517505  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33661
	I0127 13:04:11.517673  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0127 13:04:11.517689  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36735
	I0127 13:04:11.517796  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.517870  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.517921  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.518473  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.518491  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.518559  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.518676  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.518745  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.519102  563271 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0127 13:04:11.519128  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.519339  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.519555  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.519576  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.519295  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.519696  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.519727  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.519744  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.520132  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.520319  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.520342  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.520368  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.520655  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.520748  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.520786  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:11.520801  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.520833  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:11.521188  563271 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 13:04:11.521209  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0127 13:04:11.521228  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.521298  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.521536  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.522940  563271 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0127 13:04:11.523326  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.524086  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.524210  563271 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:04:11.524228  563271 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:04:11.524249  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.524674  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.525247  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.525979  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.526047  563271 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:04:11.525952  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.526963  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.527020  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.527018  563271 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0127 13:04:11.527160  563271 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:04:11.527179  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:04:11.527196  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.527290  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.527517  563271 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0127 13:04:11.527750  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.527868  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.528008  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.528150  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.528767  563271 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0127 13:04:11.528782  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0127 13:04:11.528800  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.529197  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.529223  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.529425  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.529719  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.529795  563271 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 13:04:11.530227  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.530409  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.530933  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.531370  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.531399  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.531507  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.531771  563271 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 13:04:11.531995  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.532380  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.532404  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.532424  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.532448  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.532631  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.532681  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.532800  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.533134  563271 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 13:04:11.533154  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0127 13:04:11.533171  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.533335  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.533380  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.533567  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.534251  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.534336  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.534784  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.534973  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.535308  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.535501  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.535978  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32789
	I0127 13:04:11.536397  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.536564  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.536895  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.536910  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.537008  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.537031  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.537211  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.537303  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.537352  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.537442  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.537467  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.537544  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.539209  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	W0127 13:04:11.539537  563271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58946->192.168.39.12:22: read: connection reset by peer
	I0127 13:04:11.539565  563271 retry.go:31] will retry after 179.611167ms: ssh: handshake failed: read tcp 192.168.39.1:58946->192.168.39.12:22: read: connection reset by peer
	I0127 13:04:11.540564  563271 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0127 13:04:11.541343  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I0127 13:04:11.541815  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:11.542403  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:11.542422  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:11.542748  563271 out.go:177]   - Using image docker.io/busybox:stable
	I0127 13:04:11.542789  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:11.542987  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:11.543799  563271 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 13:04:11.543816  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0127 13:04:11.543833  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.544725  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:11.544916  563271 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:04:11.544931  563271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:04:11.544947  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:11.547074  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.547325  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.547345  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.547529  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.547708  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.547883  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.548024  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.548271  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.548738  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:11.548769  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:11.548983  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:11.549170  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:11.549287  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:11.549423  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:11.915029  563271 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0127 13:04:11.915069  563271 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0127 13:04:11.944030  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 13:04:11.965802  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 13:04:12.018574  563271 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0127 13:04:12.018598  563271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0127 13:04:12.027413  563271 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0127 13:04:12.027441  563271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0127 13:04:12.045889  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 13:04:12.059955  563271 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0127 13:04:12.059971  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0127 13:04:12.064713  563271 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0127 13:04:12.064734  563271 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0127 13:04:12.085309  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 13:04:12.089224  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 13:04:12.102935  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0127 13:04:12.112431  563271 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0127 13:04:12.112451  563271 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0127 13:04:12.117447  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:04:12.121807  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:04:12.123285  563271 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:04:12.123301  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0127 13:04:12.178043  563271 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0127 13:04:12.178075  563271 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0127 13:04:12.245395  563271 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:04:12.245423  563271 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:04:12.261134  563271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0127 13:04:12.261161  563271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0127 13:04:12.271588  563271 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0127 13:04:12.271603  563271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0127 13:04:12.276265  563271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:04:12.276608  563271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 13:04:12.314238  563271 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0127 13:04:12.314257  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0127 13:04:12.344813  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0127 13:04:12.348220  563271 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0127 13:04:12.348240  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0127 13:04:12.419799  563271 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:04:12.419825  563271 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:04:12.451443  563271 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0127 13:04:12.451473  563271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0127 13:04:12.507421  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0127 13:04:12.535601  563271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0127 13:04:12.535630  563271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0127 13:04:12.554637  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0127 13:04:12.610764  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:04:12.808802  563271 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0127 13:04:12.808844  563271 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0127 13:04:12.899127  563271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0127 13:04:12.899161  563271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0127 13:04:13.161498  563271 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 13:04:13.161522  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0127 13:04:13.185446  563271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0127 13:04:13.185474  563271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0127 13:04:13.496551  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 13:04:13.572068  563271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0127 13:04:13.572098  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0127 13:04:13.778768  563271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0127 13:04:13.778796  563271 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0127 13:04:14.105073  563271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0127 13:04:14.105100  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0127 13:04:14.376897  563271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0127 13:04:14.376935  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0127 13:04:14.755008  563271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 13:04:14.755036  563271 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0127 13:04:15.073307  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 13:04:18.335203  563271 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0127 13:04:18.335254  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:18.338485  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:18.338916  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:18.338948  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:18.339157  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:18.339325  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:18.339473  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:18.339596  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:18.957501  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.013428296s)
	I0127 13:04:18.957560  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.957572  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.957568  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.991734361s)
	I0127 13:04:18.957620  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.957638  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.957667  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.911749919s)
	I0127 13:04:18.957703  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.957719  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.957770  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.872439022s)
	I0127 13:04:18.957802  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.957813  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.957825  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.868579159s)
	I0127 13:04:18.957853  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.957862  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.957959  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.957980  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.957991  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.958000  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.958099  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.958108  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.958121  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.958130  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.958138  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.958139  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.958147  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.958162  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.958168  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.958191  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.958215  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.958224  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.958232  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.958239  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.958243  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.958253  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.958259  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.958265  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.958224  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.958338  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.958349  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.958791  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.958826  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.958833  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.959073  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.959095  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.959101  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.959981  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.959993  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.960003  563271 addons.go:479] Verifying addon ingress=true in "addons-293977"
	I0127 13:04:18.960363  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.960430  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.960437  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.960606  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.857640096s)
	I0127 13:04:18.960667  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.960684  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.960682  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.843213588s)
	I0127 13:04:18.960706  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.960719  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.960790  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.838962758s)
	I0127 13:04:18.960815  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.960827  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.960828  563271 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.684540678s)
	I0127 13:04:18.960897  563271 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.684264456s)
	I0127 13:04:18.960936  563271 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0127 13:04:18.961836  563271 node_ready.go:35] waiting up to 6m0s for node "addons-293977" to be "Ready" ...
	I0127 13:04:18.962104  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.617267115s)
	I0127 13:04:18.962126  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.962135  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.962155  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.454703239s)
	I0127 13:04:18.962180  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.962193  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.962231  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.407551055s)
	I0127 13:04:18.962246  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.962256  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.962351  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.351555896s)
	I0127 13:04:18.962366  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.962374  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.963019  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.963045  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.963069  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.963076  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.963083  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.963092  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.963146  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.963168  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.963182  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.963188  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.963195  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.963255  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.963254  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.963265  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.963273  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.963273  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.963283  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.963293  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.963305  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.963313  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.963318  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.963325  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.963333  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.963337  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.963340  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.963347  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.963354  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.963360  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.963319  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.963406  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.963429  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.963435  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.963436  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.963441  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:18.963455  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.963527  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.963535  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.963636  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.963680  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.963689  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.964210  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.964248  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.964255  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.964264  563271 addons.go:479] Verifying addon registry=true in "addons-293977"
	I0127 13:04:18.965051  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:18.965126  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.965134  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.965143  563271 addons.go:479] Verifying addon metrics-server=true in "addons-293977"
	I0127 13:04:18.965492  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.965532  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.965544  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.965719  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:18.965723  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.965737  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.965747  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:18.965755  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:18.965938  563271 out.go:177] * Verifying ingress addon...
	I0127 13:04:18.966225  563271 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-293977 service yakd-dashboard -n yakd-dashboard
	
	I0127 13:04:18.966253  563271 out.go:177] * Verifying registry addon...
	I0127 13:04:18.968233  563271 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0127 13:04:18.968260  563271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0127 13:04:18.982748  563271 node_ready.go:49] node "addons-293977" has status "Ready":"True"
	I0127 13:04:18.982764  563271 node_ready.go:38] duration metric: took 20.887716ms for node "addons-293977" to be "Ready" ...
	I0127 13:04:18.982772  563271 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:04:18.991024  563271 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 13:04:18.991045  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:18.993271  563271 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0127 13:04:18.993286  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:19.024124  563271 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-wrj9v" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:19.042589  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:19.042608  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:19.042826  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:19.042845  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:19.042862  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	W0127 13:04:19.042940  563271 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0127 13:04:19.071241  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:19.071258  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:19.071605  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:19.071622  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:19.180902  563271 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0127 13:04:19.360566  563271 addons.go:238] Setting addon gcp-auth=true in "addons-293977"
	I0127 13:04:19.360638  563271 host.go:66] Checking if "addons-293977" exists ...
	I0127 13:04:19.360976  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:19.361028  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:19.376515  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0127 13:04:19.376874  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:19.377508  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:19.377529  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:19.377909  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:19.378554  563271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:04:19.378616  563271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:04:19.393256  563271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45529
	I0127 13:04:19.393772  563271 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:04:19.394237  563271 main.go:141] libmachine: Using API Version  1
	I0127 13:04:19.394256  563271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:04:19.394670  563271 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:04:19.394843  563271 main.go:141] libmachine: (addons-293977) Calling .GetState
	I0127 13:04:19.396562  563271 main.go:141] libmachine: (addons-293977) Calling .DriverName
	I0127 13:04:19.396758  563271 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0127 13:04:19.396781  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHHostname
	I0127 13:04:19.399239  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:19.399604  563271 main.go:141] libmachine: (addons-293977) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:66:86", ip: ""} in network mk-addons-293977: {Iface:virbr1 ExpiryTime:2025-01-27 14:03:38 +0000 UTC Type:0 Mac:52:54:00:78:66:86 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:addons-293977 Clientid:01:52:54:00:78:66:86}
	I0127 13:04:19.399633  563271 main.go:141] libmachine: (addons-293977) DBG | domain addons-293977 has defined IP address 192.168.39.12 and MAC address 52:54:00:78:66:86 in network mk-addons-293977
	I0127 13:04:19.399788  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHPort
	I0127 13:04:19.399964  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHKeyPath
	I0127 13:04:19.400126  563271 main.go:141] libmachine: (addons-293977) Calling .GetSSHUsername
	I0127 13:04:19.400269  563271 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/addons-293977/id_rsa Username:docker}
	I0127 13:04:19.484034  563271 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-293977" context rescaled to 1 replicas
	I0127 13:04:19.498032  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:19.498046  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:19.782079  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.285472717s)
	W0127 13:04:19.782179  563271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 13:04:19.782233  563271 retry.go:31] will retry after 300.183956ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 13:04:19.984481  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:19.985034  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:20.083364  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 13:04:20.501152  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:20.501487  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:21.024441  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:21.024829  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:21.122560  563271 pod_ready.go:103] pod "amd-gpu-device-plugin-wrj9v" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:21.144964  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.071591957s)
	I0127 13:04:21.144998  563271 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.748218391s)
	I0127 13:04:21.145031  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:21.145055  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:21.145448  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:21.145471  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:21.145488  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:21.145497  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:21.145506  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:21.145837  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:21.145898  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:21.145924  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:21.145936  563271 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-293977"
	I0127 13:04:21.146812  563271 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 13:04:21.147748  563271 out.go:177] * Verifying csi-hostpath-driver addon...
	I0127 13:04:21.149499  563271 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0127 13:04:21.150264  563271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0127 13:04:21.150739  563271 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0127 13:04:21.150756  563271 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0127 13:04:21.200907  563271 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 13:04:21.200941  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:21.250677  563271 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0127 13:04:21.250706  563271 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0127 13:04:21.355910  563271 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 13:04:21.355936  563271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0127 13:04:21.473636  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:21.473978  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:21.517765  563271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 13:04:21.655683  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:21.974134  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:21.974728  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:22.139189  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.055763932s)
	I0127 13:04:22.139261  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:22.139283  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:22.139680  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:22.139705  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:22.139703  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:22.139718  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:22.139729  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:22.139978  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:22.139995  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:22.154517  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:22.477003  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:22.481834  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:22.688723  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:22.740215  563271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.222389934s)
	I0127 13:04:22.740286  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:22.740305  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:22.740634  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:22.740698  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:22.740724  563271 main.go:141] libmachine: Making call to close driver server
	I0127 13:04:22.740739  563271 main.go:141] libmachine: (addons-293977) Calling .Close
	I0127 13:04:22.741044  563271 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:04:22.741118  563271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:04:22.741070  563271 main.go:141] libmachine: (addons-293977) DBG | Closing plugin on server side
	I0127 13:04:22.742057  563271 addons.go:479] Verifying addon gcp-auth=true in "addons-293977"
	I0127 13:04:22.743679  563271 out.go:177] * Verifying gcp-auth addon...
	I0127 13:04:22.745885  563271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0127 13:04:22.786737  563271 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0127 13:04:22.786765  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:22.976754  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:22.981813  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:23.156685  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:23.255477  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:23.473053  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:23.474849  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:23.539207  563271 pod_ready.go:103] pod "amd-gpu-device-plugin-wrj9v" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:23.655477  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:23.757400  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:23.972914  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:23.973098  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:24.154764  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:24.248850  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:24.472813  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:24.473828  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:24.654829  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:24.750020  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:24.973990  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:24.974073  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:25.154610  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:25.249550  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:25.473316  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:25.473354  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:25.654913  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:25.748838  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:25.973456  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:25.974153  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:26.030041  563271 pod_ready.go:103] pod "amd-gpu-device-plugin-wrj9v" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:26.155545  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:26.453318  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:26.553944  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:26.553969  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:26.655404  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:26.749864  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:26.972041  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:26.972883  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:27.154792  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:27.250706  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:27.472443  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:27.472720  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:27.654262  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:27.756538  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:27.972441  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:27.973364  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:28.157717  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:28.249220  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:28.476613  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:28.482565  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:28.532439  563271 pod_ready.go:103] pod "amd-gpu-device-plugin-wrj9v" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:28.657032  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:28.749623  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:28.973208  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:28.973233  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:29.155443  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:29.249946  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:29.472915  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:29.473851  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:29.654421  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:29.749275  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:29.972508  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:29.972889  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:30.154569  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:30.248666  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:30.472977  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:30.538061  563271 pod_ready.go:93] pod "amd-gpu-device-plugin-wrj9v" in "kube-system" namespace has status "Ready":"True"
	I0127 13:04:30.538088  563271 pod_ready.go:82] duration metric: took 11.513931255s for pod "amd-gpu-device-plugin-wrj9v" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:30.538099  563271 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gcjcj" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:30.540598  563271 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-gcjcj" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-gcjcj" not found
	I0127 13:04:30.540618  563271 pod_ready.go:82] duration metric: took 2.503582ms for pod "coredns-668d6bf9bc-gcjcj" in "kube-system" namespace to be "Ready" ...
	E0127 13:04:30.540630  563271 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-gcjcj" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-gcjcj" not found
	I0127 13:04:30.540638  563271 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-kxtqz" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:30.544349  563271 pod_ready.go:93] pod "coredns-668d6bf9bc-kxtqz" in "kube-system" namespace has status "Ready":"True"
	I0127 13:04:30.544367  563271 pod_ready.go:82] duration metric: took 3.719821ms for pod "coredns-668d6bf9bc-kxtqz" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:30.544379  563271 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-293977" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:30.562265  563271 pod_ready.go:93] pod "etcd-addons-293977" in "kube-system" namespace has status "Ready":"True"
	I0127 13:04:30.562283  563271 pod_ready.go:82] duration metric: took 17.898179ms for pod "etcd-addons-293977" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:30.562293  563271 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-293977" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:30.569279  563271 pod_ready.go:93] pod "kube-apiserver-addons-293977" in "kube-system" namespace has status "Ready":"True"
	I0127 13:04:30.569297  563271 pod_ready.go:82] duration metric: took 6.99877ms for pod "kube-apiserver-addons-293977" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:30.569306  563271 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-293977" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:30.576545  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:30.654247  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:30.727791  563271 pod_ready.go:93] pod "kube-controller-manager-addons-293977" in "kube-system" namespace has status "Ready":"True"
	I0127 13:04:30.727811  563271 pod_ready.go:82] duration metric: took 158.498167ms for pod "kube-controller-manager-addons-293977" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:30.727823  563271 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-52h99" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:30.749620  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:30.972890  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:30.975071  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:31.127457  563271 pod_ready.go:93] pod "kube-proxy-52h99" in "kube-system" namespace has status "Ready":"True"
	I0127 13:04:31.127484  563271 pod_ready.go:82] duration metric: took 399.652711ms for pod "kube-proxy-52h99" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:31.127500  563271 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-293977" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:31.155085  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:31.249462  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:31.472883  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:31.474914  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:31.527665  563271 pod_ready.go:93] pod "kube-scheduler-addons-293977" in "kube-system" namespace has status "Ready":"True"
	I0127 13:04:31.527697  563271 pod_ready.go:82] duration metric: took 400.188792ms for pod "kube-scheduler-addons-293977" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:31.527713  563271 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:31.655806  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:31.748610  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:31.974805  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:31.975491  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:32.154632  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:32.249679  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:32.472055  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:32.473383  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:32.655617  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:32.749130  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:32.973178  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:32.974215  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:33.154029  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:33.248803  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:33.472635  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:33.473020  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:33.535285  563271 pod_ready.go:103] pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:33.655853  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:33.749343  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:34.021112  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:34.021451  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:34.154494  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:34.248670  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:34.472830  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:34.473763  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:34.654960  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:34.755635  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:35.205184  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:35.206070  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:35.207089  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:35.250213  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:35.474154  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:35.474174  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:35.655412  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:35.751341  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:35.973222  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:35.973883  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:36.034919  563271 pod_ready.go:103] pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:36.154399  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:36.249358  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:36.473820  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:36.473892  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:36.655763  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:36.749499  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:36.973005  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:36.973245  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:37.155585  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:37.249340  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:37.473454  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:37.473674  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:37.657518  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:37.749208  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:37.974488  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:37.975667  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:38.154651  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:38.251141  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:38.473839  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:38.474449  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:38.535269  563271 pod_ready.go:103] pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:38.655295  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:38.749150  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:38.973800  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:38.974414  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:39.156595  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:39.249404  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:39.473689  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:39.473826  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:39.654588  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:39.749844  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:39.972433  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:39.973413  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:40.586011  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:40.586605  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:40.586847  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:40.595637  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:40.598054  563271 pod_ready.go:103] pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:40.655079  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:40.750276  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:40.976105  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:40.976651  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:41.155518  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:41.249618  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:41.473763  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:41.474697  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:41.655134  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:41.748862  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:41.971953  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:41.974322  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:42.155845  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:42.252315  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:42.474100  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:42.474453  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:42.656463  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:42.756027  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:42.972789  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:42.973036  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:43.033129  563271 pod_ready.go:103] pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:43.154318  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:43.249327  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:43.472846  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:43.473539  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:43.658040  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:43.749219  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:43.971669  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:43.971979  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:44.154852  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:44.249554  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:44.472153  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:44.472918  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:44.655488  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:44.749722  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:44.973508  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:44.973664  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:45.156660  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:45.250145  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:45.473205  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:45.473537  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:45.534677  563271 pod_ready.go:103] pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:45.655231  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:45.748809  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:45.972378  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:45.973151  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:46.155231  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:46.249547  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:46.472759  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:46.473522  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:46.655817  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:46.754749  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:46.972258  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:46.972728  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:47.155620  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:47.254984  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:47.473414  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:47.473803  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:47.535684  563271 pod_ready.go:103] pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:47.654257  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:47.749275  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:47.974335  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:47.974871  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:48.154154  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:48.249224  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:48.474321  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:48.475240  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:48.654707  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:48.754427  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:48.974325  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:48.974468  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:49.154313  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:49.249519  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:49.473092  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:49.473663  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:49.535957  563271 pod_ready.go:103] pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:49.655264  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:49.749447  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:49.972879  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:49.973288  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:50.161219  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:50.249601  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:50.472123  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:50.472404  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:50.719081  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:50.750046  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:50.973843  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:50.973937  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:51.157146  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:51.248512  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:51.473178  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:51.473873  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:51.654453  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:51.749046  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:51.981065  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:51.981312  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:52.032218  563271 pod_ready.go:103] pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace has status "Ready":"False"
	I0127 13:04:52.155813  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:52.250473  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:52.473721  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:52.474154  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:52.655443  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:52.749748  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:52.975439  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:52.978808  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:53.034523  563271 pod_ready.go:93] pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace has status "Ready":"True"
	I0127 13:04:53.034548  563271 pod_ready.go:82] duration metric: took 21.506825785s for pod "metrics-server-7fbb699795-zqplg" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:53.034561  563271 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vf7zd" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:53.042884  563271 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-vf7zd" in "kube-system" namespace has status "Ready":"True"
	I0127 13:04:53.042903  563271 pod_ready.go:82] duration metric: took 8.334024ms for pod "nvidia-device-plugin-daemonset-vf7zd" in "kube-system" namespace to be "Ready" ...
	I0127 13:04:53.042923  563271 pod_ready.go:39] duration metric: took 34.060139466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
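
(Editor's note) The pod_ready lines above are the harness polling each system pod for a Ready condition until a per-pod timeout expires. A rough equivalent using client-go (the kubeconfig path and pod name below are placeholders taken from this log, not minikube's kapi.go implementation) might be:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the timeout expires.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-7fbb699795-zqplg", 6*time.Minute))
    }

A watch-based wait would avoid the fixed 2s polling interval, but the polling form mirrors what the repeated kapi.go lines in this log are doing.
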
	I0127 13:04:53.042946  563271 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:04:53.043010  563271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:04:53.109850  563271 api_server.go:72] duration metric: took 41.73899735s to wait for apiserver process to appear ...
	I0127 13:04:53.109876  563271 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:04:53.109918  563271 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0127 13:04:53.117070  563271 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0127 13:04:53.118575  563271 api_server.go:141] control plane version: v1.32.1
	I0127 13:04:53.118598  563271 api_server.go:131] duration metric: took 8.714426ms to wait for apiserver health ...
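
(Editor's note) The healthz probe above is an authenticated GET against /healthz on the API server (https://192.168.39.12:8443 here), which returns the literal string "ok" when healthy. A small sketch of the same check with client-go's REST client, again assuming the minikube kubeconfig path rather than minikube's own api_server.go, might be:

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Raw GET against /healthz using the kubeconfig's credentials and CA bundle.
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(body)) // expected output: ok
    }

Going through the REST client rather than a plain http.Get keeps the client cert and CA from the kubeconfig, which is why the probe in the log succeeds over HTTPS without extra TLS flags.
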
	I0127 13:04:53.118607  563271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:04:53.131056  563271 system_pods.go:59] 18 kube-system pods found
	I0127 13:04:53.131083  563271 system_pods.go:61] "amd-gpu-device-plugin-wrj9v" [bca4f181-8bca-4ed5-a27f-031a1fe996e5] Running
	I0127 13:04:53.131087  563271 system_pods.go:61] "coredns-668d6bf9bc-kxtqz" [818054f1-7247-4f59-a8ac-01d9e4378e0a] Running
	I0127 13:04:53.131093  563271 system_pods.go:61] "csi-hostpath-attacher-0" [448b5e4b-91fc-427e-a618-e00b5252243f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 13:04:53.131099  563271 system_pods.go:61] "csi-hostpath-resizer-0" [bdccaf65-aa95-40ab-83d4-062e5b3b5228] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 13:04:53.131106  563271 system_pods.go:61] "csi-hostpathplugin-xwwzn" [8d12bed2-87a5-4810-a464-a3588a3d81da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 13:04:53.131113  563271 system_pods.go:61] "etcd-addons-293977" [b766f477-c36a-4c3f-a1d9-bc084b19dbc1] Running
	I0127 13:04:53.131120  563271 system_pods.go:61] "kube-apiserver-addons-293977" [8d6d6622-b93c-43cc-a355-3f0bc77c8d04] Running
	I0127 13:04:53.131124  563271 system_pods.go:61] "kube-controller-manager-addons-293977" [7d1ba66c-a391-4933-8ff5-8a1b7acc5f54] Running
	I0127 13:04:53.131127  563271 system_pods.go:61] "kube-ingress-dns-minikube" [7c29c70b-16a3-4e45-b0ab-6ed9525b0c4c] Running
	I0127 13:04:53.131131  563271 system_pods.go:61] "kube-proxy-52h99" [4e15d3d4-d720-41e9-bbda-1b41052ff10b] Running
	I0127 13:04:53.131135  563271 system_pods.go:61] "kube-scheduler-addons-293977" [59738e41-2e02-4dee-a6a7-3d72692cccb2] Running
	I0127 13:04:53.131140  563271 system_pods.go:61] "metrics-server-7fbb699795-zqplg" [7bf6ee82-ae62-490e-8bc8-7ec6dd29d885] Running
	I0127 13:04:53.131143  563271 system_pods.go:61] "nvidia-device-plugin-daemonset-vf7zd" [610bfd51-5c3c-4482-87c1-ef8a1006a42d] Running
	I0127 13:04:53.131146  563271 system_pods.go:61] "registry-6c88467877-6k6d6" [a783b125-0ec9-4bd0-bb67-3c277fbbe585] Running
	I0127 13:04:53.131151  563271 system_pods.go:61] "registry-proxy-bf7ln" [5f7ccfa7-9c68-43e2-8b05-519e511c9924] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 13:04:53.131161  563271 system_pods.go:61] "snapshot-controller-68b874b76f-q5jrw" [19ee5ad8-06a1-4624-9953-9efa38f15281] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 13:04:53.131166  563271 system_pods.go:61] "snapshot-controller-68b874b76f-tng4d" [b1f58a82-5265-4fe6-956f-5d4b97b2ab1a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 13:04:53.131170  563271 system_pods.go:61] "storage-provisioner" [d45d5461-e306-4890-9ad6-c68892c1920c] Running
	I0127 13:04:53.131179  563271 system_pods.go:74] duration metric: took 12.566507ms to wait for pod list to return data ...
	I0127 13:04:53.131186  563271 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:04:53.134272  563271 default_sa.go:45] found service account: "default"
	I0127 13:04:53.134288  563271 default_sa.go:55] duration metric: took 3.094151ms for default service account to be created ...
	I0127 13:04:53.134296  563271 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:04:53.141683  563271 system_pods.go:87] 18 kube-system pods found
	I0127 13:04:53.144016  563271 system_pods.go:105] "amd-gpu-device-plugin-wrj9v" [bca4f181-8bca-4ed5-a27f-031a1fe996e5] Running
	I0127 13:04:53.144031  563271 system_pods.go:105] "coredns-668d6bf9bc-kxtqz" [818054f1-7247-4f59-a8ac-01d9e4378e0a] Running
	I0127 13:04:53.144039  563271 system_pods.go:105] "csi-hostpath-attacher-0" [448b5e4b-91fc-427e-a618-e00b5252243f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 13:04:53.144047  563271 system_pods.go:105] "csi-hostpath-resizer-0" [bdccaf65-aa95-40ab-83d4-062e5b3b5228] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 13:04:53.144059  563271 system_pods.go:105] "csi-hostpathplugin-xwwzn" [8d12bed2-87a5-4810-a464-a3588a3d81da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 13:04:53.144073  563271 system_pods.go:105] "etcd-addons-293977" [b766f477-c36a-4c3f-a1d9-bc084b19dbc1] Running
	I0127 13:04:53.144081  563271 system_pods.go:105] "kube-apiserver-addons-293977" [8d6d6622-b93c-43cc-a355-3f0bc77c8d04] Running
	I0127 13:04:53.144089  563271 system_pods.go:105] "kube-controller-manager-addons-293977" [7d1ba66c-a391-4933-8ff5-8a1b7acc5f54] Running
	I0127 13:04:53.144094  563271 system_pods.go:105] "kube-ingress-dns-minikube" [7c29c70b-16a3-4e45-b0ab-6ed9525b0c4c] Running
	I0127 13:04:53.144099  563271 system_pods.go:105] "kube-proxy-52h99" [4e15d3d4-d720-41e9-bbda-1b41052ff10b] Running
	I0127 13:04:53.144104  563271 system_pods.go:105] "kube-scheduler-addons-293977" [59738e41-2e02-4dee-a6a7-3d72692cccb2] Running
	I0127 13:04:53.144108  563271 system_pods.go:105] "metrics-server-7fbb699795-zqplg" [7bf6ee82-ae62-490e-8bc8-7ec6dd29d885] Running
	I0127 13:04:53.144112  563271 system_pods.go:105] "nvidia-device-plugin-daemonset-vf7zd" [610bfd51-5c3c-4482-87c1-ef8a1006a42d] Running
	I0127 13:04:53.144116  563271 system_pods.go:105] "registry-6c88467877-6k6d6" [a783b125-0ec9-4bd0-bb67-3c277fbbe585] Running
	I0127 13:04:53.144122  563271 system_pods.go:105] "registry-proxy-bf7ln" [5f7ccfa7-9c68-43e2-8b05-519e511c9924] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 13:04:53.144128  563271 system_pods.go:105] "snapshot-controller-68b874b76f-q5jrw" [19ee5ad8-06a1-4624-9953-9efa38f15281] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 13:04:53.144135  563271 system_pods.go:105] "snapshot-controller-68b874b76f-tng4d" [b1f58a82-5265-4fe6-956f-5d4b97b2ab1a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 13:04:53.144141  563271 system_pods.go:105] "storage-provisioner" [d45d5461-e306-4890-9ad6-c68892c1920c] Running
	I0127 13:04:53.144149  563271 system_pods.go:147] duration metric: took 9.846716ms to wait for k8s-apps to be running ...
	I0127 13:04:53.144158  563271 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 13:04:53.144205  563271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:04:53.156705  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:53.181047  563271 system_svc.go:56] duration metric: took 36.884057ms WaitForService to wait for kubelet
	I0127 13:04:53.181072  563271 kubeadm.go:582] duration metric: took 41.810222703s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:04:53.181096  563271 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:04:53.192512  563271 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:04:53.192534  563271 node_conditions.go:123] node cpu capacity is 2
	I0127 13:04:53.192547  563271 node_conditions.go:105] duration metric: took 11.44264ms to run NodePressure ...
	I0127 13:04:53.192559  563271 start.go:241] waiting for startup goroutines ...
	I0127 13:04:53.249632  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:53.477700  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:53.478368  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:53.655081  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:53.749638  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:53.972751  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:53.973489  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:54.154947  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:54.250298  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:54.473290  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:54.473745  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:54.655583  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:54.755695  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:54.973066  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 13:04:54.973362  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:55.155748  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:55.249141  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:55.473489  563271 kapi.go:107] duration metric: took 36.505221231s to wait for kubernetes.io/minikube-addons=registry ...
	I0127 13:04:55.474066  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:55.654307  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:55.749176  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:55.972695  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:56.154833  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:56.248760  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:56.473230  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:56.655778  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:56.748611  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:56.972891  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:57.155169  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:57.249540  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:57.472989  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:57.654696  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:57.749690  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:57.972223  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:58.154816  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:58.248869  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:58.472574  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:58.655374  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:58.749485  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:58.973092  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:59.154993  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:59.248950  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:59.473829  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:04:59.666183  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:04:59.771106  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:04:59.972775  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:00.158591  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:00.249073  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:00.473269  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:00.655023  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:00.749956  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:01.080981  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:01.154714  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:01.249843  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:01.473053  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:01.659968  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:01.749474  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:01.972449  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:02.155054  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:02.249564  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:02.471983  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:02.654033  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:02.748981  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:02.973531  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:03.441306  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:03.441862  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:03.472351  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:03.655597  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:03.755827  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:03.972360  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:04.154430  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:04.249219  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:04.479052  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:04.656032  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:04.749800  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:04.972738  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:05.155638  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:05.249725  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:05.476064  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:05.654497  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:05.749201  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:05.973894  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:06.155324  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:06.249227  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:06.474992  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:06.658591  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:06.749342  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:06.972745  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:07.155064  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:07.250001  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:07.477535  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:07.654901  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:07.748617  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:07.973092  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:08.155695  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:08.249299  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:08.473973  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:08.656458  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:08.749606  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:08.976322  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:09.156827  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:09.256686  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:09.474273  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:09.659408  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:09.748533  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:09.972165  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:10.154601  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:10.249602  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:10.472519  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:11.004352  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:11.004838  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:11.005372  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:11.156220  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:11.255814  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:11.473594  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:11.660308  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:11.748892  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:11.972748  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:12.155665  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:12.249404  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:12.472613  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:12.655903  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:12.755121  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:12.973063  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:13.155003  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:13.249949  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:13.473563  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:13.658777  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:13.749525  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:13.980305  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:14.154653  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:14.249124  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:14.472963  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:14.654195  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:14.749277  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:14.972726  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:15.156871  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:15.249348  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:15.474103  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:15.658654  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:15.752615  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:15.972474  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:16.155179  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:16.253170  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:16.472484  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:16.655780  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:16.748649  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:16.973326  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:17.155852  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:17.248622  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:17.472281  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:17.659770  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:18.079820  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:18.080353  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:18.154305  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:18.250015  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:18.473296  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:18.654820  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:18.749467  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:18.972841  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:19.154579  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:19.249529  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:19.473994  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:19.655516  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:19.749091  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:19.974010  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:20.154731  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:20.249393  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:20.472811  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:20.654298  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 13:05:20.749225  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:20.973665  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:21.154918  563271 kapi.go:107] duration metric: took 1m0.004654392s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0127 13:05:21.248515  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:21.471980  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:21.749228  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:21.972926  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:22.249552  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:22.471761  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:22.748832  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:22.979458  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:23.248809  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:23.472409  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:23.749183  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:23.973219  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:24.249464  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:24.473229  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:24.750356  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:24.975003  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:25.249129  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:25.473113  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:25.758465  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:26.207913  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:26.248979  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:26.472513  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:26.749689  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:26.973045  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:27.250336  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:27.473427  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:27.749410  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:27.972268  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:28.249501  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:28.476487  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:28.748899  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:28.972628  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:29.249708  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:29.473117  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:29.749939  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:29.973155  563271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 13:05:30.249372  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:30.477403  563271 kapi.go:107] duration metric: took 1m11.509170598s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0127 13:05:30.750918  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:31.249404  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:31.750669  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:32.249929  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:32.750866  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:33.249833  563271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 13:05:33.750708  563271 kapi.go:107] duration metric: took 1m11.004818096s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0127 13:05:33.752043  563271 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-293977 cluster.
	I0127 13:05:33.753211  563271 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0127 13:05:33.754236  563271 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0127 13:05:33.755600  563271 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, inspektor-gadget, metrics-server, cloud-spanner, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0127 13:05:33.756748  563271 addons.go:514] duration metric: took 1m22.385846318s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin ingress-dns inspektor-gadget metrics-server cloud-spanner storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0127 13:05:33.756793  563271 start.go:246] waiting for cluster config update ...
	I0127 13:05:33.756814  563271 start.go:255] writing updated cluster config ...
	I0127 13:05:33.757077  563271 ssh_runner.go:195] Run: rm -f paused
	I0127 13:05:33.809059  563271 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 13:05:33.810371  563271 out.go:177] * Done! kubectl is now configured to use "addons-293977" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.047870236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983313047846395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603885,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=762297b6-1441-4cdc-8e0d-200fd2efc36a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.048750478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d67bc1e-7fd5-49a0-a8b9-5a5040775f4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.048821476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d67bc1e-7fd5-49a0-a8b9-5a5040775f4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.049196356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5f21d6d63084af515ad6f905935bcd1f61a7c6de41db233d8db1f244b67ea7c,PodSandboxId:f5ad5561d6c3a09295e902bc254abbee3d7e458e34fee99a55dd92898ff5708e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1737983312929271039,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-2lwj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3d3a02e-4ae7-4857-90cb-f19a06b736fe,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e08d5a5b6b24c394738ceb6d56ce222c0befd19f2584e4ac5c46d910483bb2,PodSandboxId:00b1487f807e60daf3e84ebea75ac0ea67ed1e7c9994c0a5a4aca84bf7f2c727,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737983174489026807,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08949e9b-1809-4b30-b1c1-81a95fc4e265,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2bca329aab44d8098b40610138d5acfaa5df40fb838eb3247d28a29ab93f31,PodSandboxId:8deb1f3d0aa42ccc8dfcc23aa1373a05935e342637e881acfb6f0297b165e33d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737983136931294068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1608708-17ef-4056-9d
18-636402de8414,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ec335cdc613443c3908b7c5af519aa44a1fedc66948936c37df1125774173,PodSandboxId:b75f7ea467bd9965055312ca710bbef4557862ed6b9908953df31dcdb796de2f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737983130006401953,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-qqlmx,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 58235f6c-87ba-406a-8508-67b9c02fff9d,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:40f564680f43183cc9452cf3f22fead9c05e59b7861a5a76e6379c47c48972d5,PodSandboxId:7c0e6d48982613e201cf31bd364c4e29bb765c531622142ee11a3a11f63f4e01,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737983125327237486,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6pthl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 36632489-119d-4305-8fdd-b216db176ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95d5981c947d8aead7069166756076948bdc365c17733db120312abba8a2571,PodSandboxId:e2b9aa936389ecc25520c5178b5c9b0776a1b65edde8827ee8ed8cfd8216e848,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737983109191882748,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-97cts,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9bfbe09d-0a2d-43b4-a0b2-6ae2d0f9886b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56e049a16488bd4c8630d7ba744592110f23e153279acc550cef0f89022a160,PodSandboxId:9ad8db5895850575d43c6176e30acd8b66f2e30cf86b22a604fea01b2aa29d31,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876
f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737983069289928822,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wrj9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca4f181-8bca-4ed5-a27f-031a1fe996e5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101f49559e0f824944ba5573aa6d7991e028e3d6918edfb6c09d29c1169b4df2,PodSandboxId:7566a3a1a6e58c9c503d856efd1eb665325b8b85fbe2ca2648903e32fd3bff6c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-
dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737983067770915717,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c29c70b-16a3-4e45-b0ab-6ed9525b0c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486d257962ff729d348d69eeb1bc32c0a4638d3a0932dbb2d5f8be53aae73936,PodSandboxId:540c47fd93f23afcd30de056ce190e414847bf759c529b1e479d9a
ff58c4bb4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737983057864236642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d45d5461-e306-4890-9ad6-c68892c1920c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580558ae99d9b8c69f5dd90dde85221ef11e09e2f56565e2d5e8ddc88b6feb2c,PodSandboxId:90c67a5938c6fe4c75ef1eb7f764c9365708f07308caa6cfec24dafef66813c8,M
etadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983054513087925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kxtqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818054f1-7247-4f59-a8ac-01d9e4378e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:61b98bc143e97904cccb969a6bab6f553d1fa1c0158bae3d558ca42e77e34984,PodSandboxId:3a20a8387f705fa3a99268370ce87e7e12a643df58083ac2c2700a8c1c589fc4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983051964053366,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52h99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15d3d4-d720-41e9-bbda-1b41052ff10b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:80fd3b24c92d20b6140404d5e9aa534dd065bd3c0e896fe7536d16c7c631d145,PodSandboxId:f48fb187ad4496a519151b583ead234f32ab70280ac36a62644cc0d244e7d919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983040180744819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590ec858b2bf0331498d19c73bd93f28,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:fbac5e1ab44e685d2a1e3b071d52b53f796ace79bdebc6bde55fc2fcd3ce34e8,PodSandboxId:9ca8d251ae63e3148427729b3a15fb15684505824229e42e4b61cfd28ae75513,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983040187805139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f29e63444cc834a6abc226f6dd07933,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a12293bc7de4536bea
c5bf167816e62fb7ce55ee90299b4e43cc44a16ec50e,PodSandboxId:a61747ec5e80ea1e23d097f1f9c1987bd5884921bfdd47024c8289e4c6869810,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983040202968287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80986f01e3f0b219f43b093f6bad6e15,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9e85b05729bab833eb805cf37ecad709ba1
bb364d6fd2980b037fa889d8ff3,PodSandboxId:c5a5611ba748bd6468a27c5d761d10a2aff7711bf559e474557133a33ca5596f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983040134699227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 657df07ffa4d67d86fbfc15ed4a086d5,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d67bc1e-7fd5-49a0-
a8b9-5a5040775f4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.089823232Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ad794df-6357-4f28-a9e0-d795f63459eb name=/runtime.v1.RuntimeService/Version
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.089894750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ad794df-6357-4f28-a9e0-d795f63459eb name=/runtime.v1.RuntimeService/Version
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.091351796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1b06dbe-f8f7-4171-b7b2-8b05928e1c61 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.092516819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983313092492951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603885,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1b06dbe-f8f7-4171-b7b2-8b05928e1c61 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.092889989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=712f35e5-11bc-41ff-8bda-1450660e6a73 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.092969500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=712f35e5-11bc-41ff-8bda-1450660e6a73 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.093415663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5f21d6d63084af515ad6f905935bcd1f61a7c6de41db233d8db1f244b67ea7c,PodSandboxId:f5ad5561d6c3a09295e902bc254abbee3d7e458e34fee99a55dd92898ff5708e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1737983312929271039,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-2lwj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3d3a02e-4ae7-4857-90cb-f19a06b736fe,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e08d5a5b6b24c394738ceb6d56ce222c0befd19f2584e4ac5c46d910483bb2,PodSandboxId:00b1487f807e60daf3e84ebea75ac0ea67ed1e7c9994c0a5a4aca84bf7f2c727,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737983174489026807,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08949e9b-1809-4b30-b1c1-81a95fc4e265,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2bca329aab44d8098b40610138d5acfaa5df40fb838eb3247d28a29ab93f31,PodSandboxId:8deb1f3d0aa42ccc8dfcc23aa1373a05935e342637e881acfb6f0297b165e33d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737983136931294068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1608708-17ef-4056-9d
18-636402de8414,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ec335cdc613443c3908b7c5af519aa44a1fedc66948936c37df1125774173,PodSandboxId:b75f7ea467bd9965055312ca710bbef4557862ed6b9908953df31dcdb796de2f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737983130006401953,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-qqlmx,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 58235f6c-87ba-406a-8508-67b9c02fff9d,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:40f564680f43183cc9452cf3f22fead9c05e59b7861a5a76e6379c47c48972d5,PodSandboxId:7c0e6d48982613e201cf31bd364c4e29bb765c531622142ee11a3a11f63f4e01,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737983125327237486,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6pthl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 36632489-119d-4305-8fdd-b216db176ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95d5981c947d8aead7069166756076948bdc365c17733db120312abba8a2571,PodSandboxId:e2b9aa936389ecc25520c5178b5c9b0776a1b65edde8827ee8ed8cfd8216e848,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737983109191882748,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-97cts,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9bfbe09d-0a2d-43b4-a0b2-6ae2d0f9886b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56e049a16488bd4c8630d7ba744592110f23e153279acc550cef0f89022a160,PodSandboxId:9ad8db5895850575d43c6176e30acd8b66f2e30cf86b22a604fea01b2aa29d31,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876
f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737983069289928822,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wrj9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca4f181-8bca-4ed5-a27f-031a1fe996e5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101f49559e0f824944ba5573aa6d7991e028e3d6918edfb6c09d29c1169b4df2,PodSandboxId:7566a3a1a6e58c9c503d856efd1eb665325b8b85fbe2ca2648903e32fd3bff6c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-
dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737983067770915717,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c29c70b-16a3-4e45-b0ab-6ed9525b0c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486d257962ff729d348d69eeb1bc32c0a4638d3a0932dbb2d5f8be53aae73936,PodSandboxId:540c47fd93f23afcd30de056ce190e414847bf759c529b1e479d9a
ff58c4bb4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737983057864236642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d45d5461-e306-4890-9ad6-c68892c1920c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580558ae99d9b8c69f5dd90dde85221ef11e09e2f56565e2d5e8ddc88b6feb2c,PodSandboxId:90c67a5938c6fe4c75ef1eb7f764c9365708f07308caa6cfec24dafef66813c8,M
etadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983054513087925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kxtqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818054f1-7247-4f59-a8ac-01d9e4378e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:61b98bc143e97904cccb969a6bab6f553d1fa1c0158bae3d558ca42e77e34984,PodSandboxId:3a20a8387f705fa3a99268370ce87e7e12a643df58083ac2c2700a8c1c589fc4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983051964053366,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52h99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15d3d4-d720-41e9-bbda-1b41052ff10b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:80fd3b24c92d20b6140404d5e9aa534dd065bd3c0e896fe7536d16c7c631d145,PodSandboxId:f48fb187ad4496a519151b583ead234f32ab70280ac36a62644cc0d244e7d919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983040180744819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590ec858b2bf0331498d19c73bd93f28,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:fbac5e1ab44e685d2a1e3b071d52b53f796ace79bdebc6bde55fc2fcd3ce34e8,PodSandboxId:9ca8d251ae63e3148427729b3a15fb15684505824229e42e4b61cfd28ae75513,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983040187805139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f29e63444cc834a6abc226f6dd07933,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a12293bc7de4536bea
c5bf167816e62fb7ce55ee90299b4e43cc44a16ec50e,PodSandboxId:a61747ec5e80ea1e23d097f1f9c1987bd5884921bfdd47024c8289e4c6869810,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983040202968287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80986f01e3f0b219f43b093f6bad6e15,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9e85b05729bab833eb805cf37ecad709ba1
bb364d6fd2980b037fa889d8ff3,PodSandboxId:c5a5611ba748bd6468a27c5d761d10a2aff7711bf559e474557133a33ca5596f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983040134699227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 657df07ffa4d67d86fbfc15ed4a086d5,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=712f35e5-11bc-41ff-
8bda-1450660e6a73 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.123719335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c53aa77-8a48-45fd-9f85-5ffe1c810463 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.123800670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c53aa77-8a48-45fd-9f85-5ffe1c810463 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.125225673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70231f70-e8c6-4903-96b3-5fdbd6cdf123 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.126502805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983313126483139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603885,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70231f70-e8c6-4903-96b3-5fdbd6cdf123 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.127052111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9d2227d-c0cb-4ec5-ba1d-53f95c2c5a7f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.127100220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9d2227d-c0cb-4ec5-ba1d-53f95c2c5a7f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.127519674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5f21d6d63084af515ad6f905935bcd1f61a7c6de41db233d8db1f244b67ea7c,PodSandboxId:f5ad5561d6c3a09295e902bc254abbee3d7e458e34fee99a55dd92898ff5708e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1737983312929271039,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-2lwj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3d3a02e-4ae7-4857-90cb-f19a06b736fe,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e08d5a5b6b24c394738ceb6d56ce222c0befd19f2584e4ac5c46d910483bb2,PodSandboxId:00b1487f807e60daf3e84ebea75ac0ea67ed1e7c9994c0a5a4aca84bf7f2c727,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737983174489026807,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08949e9b-1809-4b30-b1c1-81a95fc4e265,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2bca329aab44d8098b40610138d5acfaa5df40fb838eb3247d28a29ab93f31,PodSandboxId:8deb1f3d0aa42ccc8dfcc23aa1373a05935e342637e881acfb6f0297b165e33d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737983136931294068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1608708-17ef-4056-9d
18-636402de8414,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ec335cdc613443c3908b7c5af519aa44a1fedc66948936c37df1125774173,PodSandboxId:b75f7ea467bd9965055312ca710bbef4557862ed6b9908953df31dcdb796de2f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737983130006401953,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-qqlmx,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 58235f6c-87ba-406a-8508-67b9c02fff9d,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:40f564680f43183cc9452cf3f22fead9c05e59b7861a5a76e6379c47c48972d5,PodSandboxId:7c0e6d48982613e201cf31bd364c4e29bb765c531622142ee11a3a11f63f4e01,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737983125327237486,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6pthl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 36632489-119d-4305-8fdd-b216db176ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95d5981c947d8aead7069166756076948bdc365c17733db120312abba8a2571,PodSandboxId:e2b9aa936389ecc25520c5178b5c9b0776a1b65edde8827ee8ed8cfd8216e848,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737983109191882748,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-97cts,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9bfbe09d-0a2d-43b4-a0b2-6ae2d0f9886b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56e049a16488bd4c8630d7ba744592110f23e153279acc550cef0f89022a160,PodSandboxId:9ad8db5895850575d43c6176e30acd8b66f2e30cf86b22a604fea01b2aa29d31,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876
f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737983069289928822,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wrj9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca4f181-8bca-4ed5-a27f-031a1fe996e5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101f49559e0f824944ba5573aa6d7991e028e3d6918edfb6c09d29c1169b4df2,PodSandboxId:7566a3a1a6e58c9c503d856efd1eb665325b8b85fbe2ca2648903e32fd3bff6c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-
dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737983067770915717,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c29c70b-16a3-4e45-b0ab-6ed9525b0c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486d257962ff729d348d69eeb1bc32c0a4638d3a0932dbb2d5f8be53aae73936,PodSandboxId:540c47fd93f23afcd30de056ce190e414847bf759c529b1e479d9a
ff58c4bb4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737983057864236642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d45d5461-e306-4890-9ad6-c68892c1920c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580558ae99d9b8c69f5dd90dde85221ef11e09e2f56565e2d5e8ddc88b6feb2c,PodSandboxId:90c67a5938c6fe4c75ef1eb7f764c9365708f07308caa6cfec24dafef66813c8,M
etadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983054513087925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kxtqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818054f1-7247-4f59-a8ac-01d9e4378e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:61b98bc143e97904cccb969a6bab6f553d1fa1c0158bae3d558ca42e77e34984,PodSandboxId:3a20a8387f705fa3a99268370ce87e7e12a643df58083ac2c2700a8c1c589fc4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983051964053366,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52h99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15d3d4-d720-41e9-bbda-1b41052ff10b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:80fd3b24c92d20b6140404d5e9aa534dd065bd3c0e896fe7536d16c7c631d145,PodSandboxId:f48fb187ad4496a519151b583ead234f32ab70280ac36a62644cc0d244e7d919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983040180744819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590ec858b2bf0331498d19c73bd93f28,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:fbac5e1ab44e685d2a1e3b071d52b53f796ace79bdebc6bde55fc2fcd3ce34e8,PodSandboxId:9ca8d251ae63e3148427729b3a15fb15684505824229e42e4b61cfd28ae75513,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983040187805139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f29e63444cc834a6abc226f6dd07933,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a12293bc7de4536bea
c5bf167816e62fb7ce55ee90299b4e43cc44a16ec50e,PodSandboxId:a61747ec5e80ea1e23d097f1f9c1987bd5884921bfdd47024c8289e4c6869810,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983040202968287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80986f01e3f0b219f43b093f6bad6e15,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9e85b05729bab833eb805cf37ecad709ba1
bb364d6fd2980b037fa889d8ff3,PodSandboxId:c5a5611ba748bd6468a27c5d761d10a2aff7711bf559e474557133a33ca5596f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983040134699227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 657df07ffa4d67d86fbfc15ed4a086d5,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9d2227d-c0cb-4ec5-
ba1d-53f95c2c5a7f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.163063473Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab5666b8-82a1-47f2-9918-ac2bffcd3ee6 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.163111391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab5666b8-82a1-47f2-9918-ac2bffcd3ee6 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.164523433Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6315b9b-d6ec-47f4-b2c9-a0224b102267 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.166913322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983313166887753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603885,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6315b9b-d6ec-47f4-b2c9-a0224b102267 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.167700664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bead0f0-15e2-4e3b-b2bb-d2334030636e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.167746444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bead0f0-15e2-4e3b-b2bb-d2334030636e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:08:33 addons-293977 crio[667]: time="2025-01-27 13:08:33.168112604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5f21d6d63084af515ad6f905935bcd1f61a7c6de41db233d8db1f244b67ea7c,PodSandboxId:f5ad5561d6c3a09295e902bc254abbee3d7e458e34fee99a55dd92898ff5708e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1737983312929271039,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-2lwj5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3d3a02e-4ae7-4857-90cb-f19a06b736fe,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e08d5a5b6b24c394738ceb6d56ce222c0befd19f2584e4ac5c46d910483bb2,PodSandboxId:00b1487f807e60daf3e84ebea75ac0ea67ed1e7c9994c0a5a4aca84bf7f2c727,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737983174489026807,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08949e9b-1809-4b30-b1c1-81a95fc4e265,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2bca329aab44d8098b40610138d5acfaa5df40fb838eb3247d28a29ab93f31,PodSandboxId:8deb1f3d0aa42ccc8dfcc23aa1373a05935e342637e881acfb6f0297b165e33d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737983136931294068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b1608708-17ef-4056-9d
18-636402de8414,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ec335cdc613443c3908b7c5af519aa44a1fedc66948936c37df1125774173,PodSandboxId:b75f7ea467bd9965055312ca710bbef4557862ed6b9908953df31dcdb796de2f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737983130006401953,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-qqlmx,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 58235f6c-87ba-406a-8508-67b9c02fff9d,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:40f564680f43183cc9452cf3f22fead9c05e59b7861a5a76e6379c47c48972d5,PodSandboxId:7c0e6d48982613e201cf31bd364c4e29bb765c531622142ee11a3a11f63f4e01,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737983125327237486,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6pthl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 36632489-119d-4305-8fdd-b216db176ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95d5981c947d8aead7069166756076948bdc365c17733db120312abba8a2571,PodSandboxId:e2b9aa936389ecc25520c5178b5c9b0776a1b65edde8827ee8ed8cfd8216e848,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737983109191882748,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-97cts,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9bfbe09d-0a2d-43b4-a0b2-6ae2d0f9886b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56e049a16488bd4c8630d7ba744592110f23e153279acc550cef0f89022a160,PodSandboxId:9ad8db5895850575d43c6176e30acd8b66f2e30cf86b22a604fea01b2aa29d31,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876
f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737983069289928822,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wrj9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca4f181-8bca-4ed5-a27f-031a1fe996e5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101f49559e0f824944ba5573aa6d7991e028e3d6918edfb6c09d29c1169b4df2,PodSandboxId:7566a3a1a6e58c9c503d856efd1eb665325b8b85fbe2ca2648903e32fd3bff6c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-
dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737983067770915717,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c29c70b-16a3-4e45-b0ab-6ed9525b0c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:486d257962ff729d348d69eeb1bc32c0a4638d3a0932dbb2d5f8be53aae73936,PodSandboxId:540c47fd93f23afcd30de056ce190e414847bf759c529b1e479d9a
ff58c4bb4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737983057864236642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d45d5461-e306-4890-9ad6-c68892c1920c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:580558ae99d9b8c69f5dd90dde85221ef11e09e2f56565e2d5e8ddc88b6feb2c,PodSandboxId:90c67a5938c6fe4c75ef1eb7f764c9365708f07308caa6cfec24dafef66813c8,M
etadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983054513087925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kxtqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 818054f1-7247-4f59-a8ac-01d9e4378e0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:61b98bc143e97904cccb969a6bab6f553d1fa1c0158bae3d558ca42e77e34984,PodSandboxId:3a20a8387f705fa3a99268370ce87e7e12a643df58083ac2c2700a8c1c589fc4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983051964053366,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-52h99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e15d3d4-d720-41e9-bbda-1b41052ff10b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:80fd3b24c92d20b6140404d5e9aa534dd065bd3c0e896fe7536d16c7c631d145,PodSandboxId:f48fb187ad4496a519151b583ead234f32ab70280ac36a62644cc0d244e7d919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983040180744819,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590ec858b2bf0331498d19c73bd93f28,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:fbac5e1ab44e685d2a1e3b071d52b53f796ace79bdebc6bde55fc2fcd3ce34e8,PodSandboxId:9ca8d251ae63e3148427729b3a15fb15684505824229e42e4b61cfd28ae75513,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983040187805139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f29e63444cc834a6abc226f6dd07933,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a12293bc7de4536bea
c5bf167816e62fb7ce55ee90299b4e43cc44a16ec50e,PodSandboxId:a61747ec5e80ea1e23d097f1f9c1987bd5884921bfdd47024c8289e4c6869810,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983040202968287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80986f01e3f0b219f43b093f6bad6e15,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9e85b05729bab833eb805cf37ecad709ba1
bb364d6fd2980b037fa889d8ff3,PodSandboxId:c5a5611ba748bd6468a27c5d761d10a2aff7711bf559e474557133a33ca5596f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983040134699227,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-293977,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 657df07ffa4d67d86fbfc15ed4a086d5,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2bead0f0-15e2-4e3b-
b2bb-d2334030636e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	c5f21d6d63084       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   f5ad5561d6c3a       hello-world-app-7d9564db4-2lwj5
	e4e08d5a5b6b2       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                              2 minutes ago            Running             nginx                     0                   00b1487f807e6       nginx
	7b2bca329aab4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   8deb1f3d0aa42       busybox
	219ec335cdc61       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago            Running             controller                0                   b75f7ea467bd9       ingress-nginx-controller-56d7c84fd4-qqlmx
	40f564680f431       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago            Exited              patch                     2                   7c0e6d4898261       ingress-nginx-admission-patch-6pthl
	b95d5981c947d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              create                    0                   e2b9aa936389e       ingress-nginx-admission-create-97cts
	a56e049a16488       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago            Running             amd-gpu-device-plugin     0                   9ad8db5895850       amd-gpu-device-plugin-wrj9v
	101f49559e0f8       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago            Running             minikube-ingress-dns      0                   7566a3a1a6e58       kube-ingress-dns-minikube
	486d257962ff7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   540c47fd93f23       storage-provisioner
	580558ae99d9b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago            Running             coredns                   0                   90c67a5938c6f       coredns-668d6bf9bc-kxtqz
	61b98bc143e97       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago            Running             kube-proxy                0                   3a20a8387f705       kube-proxy-52h99
	17a12293bc7de       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago            Running             kube-apiserver            0                   a61747ec5e80e       kube-apiserver-addons-293977
	fbac5e1ab44e6       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago            Running             etcd                      0                   9ca8d251ae63e       etcd-addons-293977
	80fd3b24c92d2       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago            Running             kube-controller-manager   0                   f48fb187ad449       kube-controller-manager-addons-293977
	2a9e85b05729b       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago            Running             kube-scheduler            0                   c5a5611ba748b       kube-scheduler-addons-293977
	
	
	==> coredns [580558ae99d9b8c69f5dd90dde85221ef11e09e2f56565e2d5e8ddc88b6feb2c] <==
	[INFO] 10.244.0.8:38854 - 29240 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000131112s
	[INFO] 10.244.0.8:38854 - 48021 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000092189s
	[INFO] 10.244.0.8:38854 - 941 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000089337s
	[INFO] 10.244.0.8:38854 - 38811 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000111052s
	[INFO] 10.244.0.8:38854 - 46409 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00020163s
	[INFO] 10.244.0.8:38854 - 18467 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000097698s
	[INFO] 10.244.0.8:38854 - 51796 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000113058s
	[INFO] 10.244.0.8:46710 - 36624 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000127868s
	[INFO] 10.244.0.8:46710 - 36347 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00019562s
	[INFO] 10.244.0.8:42442 - 26999 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100607s
	[INFO] 10.244.0.8:42442 - 26720 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000105767s
	[INFO] 10.244.0.8:45928 - 31983 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083381s
	[INFO] 10.244.0.8:45928 - 31506 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081919s
	[INFO] 10.244.0.8:36120 - 54464 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083538s
	[INFO] 10.244.0.8:36120 - 54271 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130181s
	[INFO] 10.244.0.23:60534 - 42146 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0002775s
	[INFO] 10.244.0.23:42615 - 49331 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000187157s
	[INFO] 10.244.0.23:51323 - 14147 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125076s
	[INFO] 10.244.0.23:38043 - 23761 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000093709s
	[INFO] 10.244.0.23:57984 - 32374 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081203s
	[INFO] 10.244.0.23:50895 - 10382 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066162s
	[INFO] 10.244.0.23:51215 - 64034 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.002471994s
	[INFO] 10.244.0.23:54092 - 40347 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003928821s
	[INFO] 10.244.0.27:60323 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000373776s
	[INFO] 10.244.0.27:44530 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120468s
	
	
	==> describe nodes <==
	Name:               addons-293977
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-293977
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d
	                    minikube.k8s.io/name=addons-293977
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_04_06_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-293977
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:04:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-293977
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:08:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 13:06:48 +0000   Mon, 27 Jan 2025 13:04:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 13:06:48 +0000   Mon, 27 Jan 2025 13:04:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 13:06:48 +0000   Mon, 27 Jan 2025 13:04:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 13:06:48 +0000   Mon, 27 Jan 2025 13:04:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    addons-293977
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 38d2703ef8514407893a541f9b6eb575
	  System UUID:                38d2703e-f851-4407-893a-541f9b6eb575
	  Boot ID:                    13245e27-a536-4205-8361-ab6055fbc59d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     hello-world-app-7d9564db4-2lwj5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-qqlmx    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m15s
	  kube-system                 amd-gpu-device-plugin-wrj9v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 coredns-668d6bf9bc-kxtqz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m22s
	  kube-system                 etcd-addons-293977                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m27s
	  kube-system                 kube-apiserver-addons-293977                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-controller-manager-addons-293977        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-52h99                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-scheduler-addons-293977                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m20s  kube-proxy       
	  Normal  Starting                 4m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m27s  kubelet          Node addons-293977 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s  kubelet          Node addons-293977 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s  kubelet          Node addons-293977 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m27s  kubelet          Node addons-293977 status is now: NodeReady
	  Normal  RegisteredNode           4m23s  node-controller  Node addons-293977 event: Registered Node addons-293977 in Controller
	
	
	==> dmesg <==
	[  +5.877699] systemd-fstab-generator[1392]: Ignoring "noauto" option for root device
	[  +0.014052] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.066829] kauditd_printk_skb: 121 callbacks suppressed
	[  +5.055884] kauditd_printk_skb: 123 callbacks suppressed
	[  +6.006330] kauditd_printk_skb: 83 callbacks suppressed
	[ +14.594703] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.657966] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.342912] kauditd_printk_skb: 2 callbacks suppressed
	[Jan27 13:05] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.125888] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.109238] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.135961] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.551752] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.006679] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.540263] kauditd_printk_skb: 13 callbacks suppressed
	[ +14.960501] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.119946] kauditd_printk_skb: 18 callbacks suppressed
	[Jan27 13:06] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.092647] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.152367] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.425502] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.080361] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.648547] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.886160] kauditd_printk_skb: 7 callbacks suppressed
	[ +14.349744] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [fbac5e1ab44e685d2a1e3b071d52b53f796ace79bdebc6bde55fc2fcd3ce34e8] <==
	{"level":"info","ts":"2025-01-27T13:05:18.066869Z","caller":"traceutil/trace.go:171","msg":"trace[667061009] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1074; }","duration":"110.543471ms","start":"2025-01-27T13:05:17.956321Z","end":"2025-01-27T13:05:18.066864Z","steps":["trace[667061009] 'range keys from in-memory index tree'  (duration: 110.430272ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:05:26.193809Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.014194ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:05:26.193893Z","caller":"traceutil/trace.go:171","msg":"trace[1335273053] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1100; }","duration":"351.110916ms","start":"2025-01-27T13:05:25.842770Z","end":"2025-01-27T13:05:26.193881Z","steps":["trace[1335273053] 'range keys from in-memory index tree'  (duration: 351.004334ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:05:26.194046Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.714396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:05:26.194083Z","caller":"traceutil/trace.go:171","msg":"trace[652285003] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1100; }","duration":"234.774853ms","start":"2025-01-27T13:05:25.959303Z","end":"2025-01-27T13:05:26.194077Z","steps":["trace[652285003] 'range keys from in-memory index tree'  (duration: 234.513125ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:05:54.175065Z","caller":"traceutil/trace.go:171","msg":"trace[1704729708] transaction","detail":"{read_only:false; response_revision:1295; number_of_response:1; }","duration":"113.4892ms","start":"2025-01-27T13:05:54.061563Z","end":"2025-01-27T13:05:54.175052Z","steps":["trace[1704729708] 'process raft request'  (duration: 113.343253ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:05:57.859699Z","caller":"traceutil/trace.go:171","msg":"trace[634939846] transaction","detail":"{read_only:false; response_revision:1303; number_of_response:1; }","duration":"137.268247ms","start":"2025-01-27T13:05:57.722420Z","end":"2025-01-27T13:05:57.859688Z","steps":["trace[634939846] 'process raft request'  (duration: 137.001374ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:05:57.859537Z","caller":"traceutil/trace.go:171","msg":"trace[165377730] linearizableReadLoop","detail":"{readStateIndex:1341; appliedIndex:1340; }","duration":"125.832641ms","start":"2025-01-27T13:05:57.733692Z","end":"2025-01-27T13:05:57.859525Z","steps":["trace[165377730] 'read index received'  (duration: 125.705422ms)","trace[165377730] 'applied index is now lower than readState.Index'  (duration: 126.689µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T13:05:57.860084Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.37715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" limit:1 ","response":"range_response_count:1 size:2270"}
	{"level":"info","ts":"2025-01-27T13:05:57.861102Z","caller":"traceutil/trace.go:171","msg":"trace[543735854] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:1303; }","duration":"127.416782ms","start":"2025-01-27T13:05:57.733672Z","end":"2025-01-27T13:05:57.861089Z","steps":["trace[543735854] 'agreement among raft nodes before linearized reading'  (duration: 126.337056ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:06:04.492657Z","caller":"traceutil/trace.go:171","msg":"trace[40877096] transaction","detail":"{read_only:false; response_revision:1353; number_of_response:1; }","duration":"115.328814ms","start":"2025-01-27T13:06:04.377313Z","end":"2025-01-27T13:06:04.492642Z","steps":["trace[40877096] 'process raft request'  (duration: 115.240167ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:06:04.493119Z","caller":"traceutil/trace.go:171","msg":"trace[70779126] linearizableReadLoop","detail":"{readStateIndex:1392; appliedIndex:1392; }","duration":"107.756695ms","start":"2025-01-27T13:06:04.385300Z","end":"2025-01-27T13:06:04.493057Z","steps":["trace[70779126] 'read index received'  (duration: 107.75255ms)","trace[70779126] 'applied index is now lower than readState.Index'  (duration: 3.345µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T13:06:04.493221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.904252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:06:04.493364Z","caller":"traceutil/trace.go:171","msg":"trace[1573206472] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1353; }","duration":"108.038669ms","start":"2025-01-27T13:06:04.385277Z","end":"2025-01-27T13:06:04.493315Z","steps":["trace[1573206472] 'agreement among raft nodes before linearized reading'  (duration: 107.908332ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:06:05.335871Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.189948ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1349835962009265432 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.12\" mod_revision:1298 > success:<request_put:<key:\"/registry/masterleases/192.168.39.12\" value_size:66 lease:1349835962009265428 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.12\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T13:06:05.335955Z","caller":"traceutil/trace.go:171","msg":"trace[321513957] transaction","detail":"{read_only:false; response_revision:1357; number_of_response:1; }","duration":"427.329985ms","start":"2025-01-27T13:06:04.908614Z","end":"2025-01-27T13:06:05.335944Z","steps":["trace[321513957] 'process raft request'  (duration: 225.801183ms)","trace[321513957] 'compare'  (duration: 201.037576ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T13:06:05.335994Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T13:06:04.908604Z","time spent":"427.370527ms","remote":"127.0.0.1:43428","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.12\" mod_revision:1298 > success:<request_put:<key:\"/registry/masterleases/192.168.39.12\" value_size:66 lease:1349835962009265428 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.12\" > >"}
	{"level":"info","ts":"2025-01-27T13:06:05.339307Z","caller":"traceutil/trace.go:171","msg":"trace[878331023] linearizableReadLoop","detail":"{readStateIndex:1398; appliedIndex:1396; }","duration":"325.654635ms","start":"2025-01-27T13:06:05.013642Z","end":"2025-01-27T13:06:05.339297Z","steps":["trace[878331023] 'read index received'  (duration: 120.838284ms)","trace[878331023] 'applied index is now lower than readState.Index'  (duration: 204.814205ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T13:06:05.339400Z","caller":"traceutil/trace.go:171","msg":"trace[1571434326] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1358; }","duration":"419.527856ms","start":"2025-01-27T13:06:04.919868Z","end":"2025-01-27T13:06:05.339396Z","steps":["trace[1571434326] 'process raft request'  (duration: 419.267439ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:06:05.339501Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T13:06:04.919856Z","time spent":"419.575014ms","remote":"127.0.0.1:43630","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":43,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/gadget/gadget\" mod_revision:570 > success:<request_delete_range:<key:\"/registry/serviceaccounts/gadget/gadget\" > > failure:<request_range:<key:\"/registry/serviceaccounts/gadget/gadget\" > >"}
	{"level":"warn","ts":"2025-01-27T13:06:05.339516Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.051689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-01-27T13:06:05.339609Z","caller":"traceutil/trace.go:171","msg":"trace[371805410] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:0; response_revision:1358; }","duration":"279.168251ms","start":"2025-01-27T13:06:05.060434Z","end":"2025-01-27T13:06:05.339602Z","steps":["trace[371805410] 'agreement among raft nodes before linearized reading'  (duration: 279.051865ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:06:05.339709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.0644ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:06:05.341821Z","caller":"traceutil/trace.go:171","msg":"trace[965092756] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1358; }","duration":"328.191137ms","start":"2025-01-27T13:06:05.013622Z","end":"2025-01-27T13:06:05.341813Z","steps":["trace[965092756] 'agreement among raft nodes before linearized reading'  (duration: 326.071849ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:06:05.341947Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T13:06:05.013607Z","time spent":"328.329523ms","remote":"127.0.0.1:43604","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	
	
	==> kernel <==
	 13:08:33 up 5 min,  0 users,  load average: 0.78, 1.49, 0.76
	Linux addons-293977 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [17a12293bc7de4536beac5bf167816e62fb7ce55ee90299b4e43cc44a16ec50e] <==
	I0127 13:05:51.908481       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.170.170"}
	I0127 13:06:04.868587       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0127 13:06:06.432525       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0127 13:06:11.681455       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0127 13:06:11.877642       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.77.14"}
	I0127 13:06:13.455703       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0127 13:06:27.420219       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0127 13:06:27.426223       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0127 13:06:27.431500       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0127 13:06:42.434620       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0127 13:06:44.135052       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 13:06:44.135181       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 13:06:44.169706       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 13:06:44.169763       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 13:06:44.209597       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 13:06:44.209662       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 13:06:44.277038       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 13:06:44.277569       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 13:06:44.310441       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 13:06:44.310506       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0127 13:06:45.278082       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0127 13:06:45.310684       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0127 13:06:45.361102       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0127 13:06:53.946365       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0127 13:08:31.792551       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.145.186"}
	
	
	==> kube-controller-manager [80fd3b24c92d20b6140404d5e9aa534dd065bd3c0e896fe7536d16c7c631d145] <==
	E0127 13:07:29.102058       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 13:07:57.798848       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 13:07:57.799936       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0127 13:07:57.800916       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 13:07:57.800942       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 13:08:05.901367       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 13:08:05.902282       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 13:08:05.903382       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 13:08:05.903431       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 13:08:08.334640       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 13:08:08.335740       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0127 13:08:08.336663       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 13:08:08.336732       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 13:08:11.491275       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 13:08:11.492054       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0127 13:08:11.492830       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 13:08:11.492854       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 13:08:29.737097       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 13:08:29.738533       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0127 13:08:29.739428       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 13:08:29.739522       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0127 13:08:31.572443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="44.296658ms"
	I0127 13:08:31.597527       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="25.007893ms"
	I0127 13:08:31.632359       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="34.785571ms"
	I0127 13:08:31.632564       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="67.843µs"
	
	
	==> kube-proxy [61b98bc143e97904cccb969a6bab6f553d1fa1c0158bae3d558ca42e77e34984] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:04:12.897260       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 13:04:12.907485       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	E0127 13:04:12.907540       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 13:04:13.025230       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 13:04:13.025265       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 13:04:13.025285       1 server_linux.go:170] "Using iptables Proxier"
	I0127 13:04:13.047452       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 13:04:13.047729       1 server.go:497] "Version info" version="v1.32.1"
	I0127 13:04:13.047762       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:04:13.049446       1 config.go:199] "Starting service config controller"
	I0127 13:04:13.049475       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 13:04:13.049511       1 config.go:105] "Starting endpoint slice config controller"
	I0127 13:04:13.049532       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 13:04:13.049878       1 config.go:329] "Starting node config controller"
	I0127 13:04:13.049909       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 13:04:13.152241       1 shared_informer.go:320] Caches are synced for node config
	I0127 13:04:13.152254       1 shared_informer.go:320] Caches are synced for service config
	I0127 13:04:13.152268       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2a9e85b05729bab833eb805cf37ecad709ba1bb364d6fd2980b037fa889d8ff3] <==
	W0127 13:04:02.828846       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 13:04:02.828933       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:04:02.832706       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 13:04:02.832818       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:04:03.643391       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 13:04:03.643447       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:04:03.649774       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 13:04:03.649846       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 13:04:03.654395       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 13:04:03.654433       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:04:03.659625       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 13:04:03.659643       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:04:03.735002       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 13:04:03.735043       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:04:03.737216       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 13:04:03.737285       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 13:04:03.885333       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 13:04:03.885388       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:04:03.889978       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 13:04:03.890110       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:04:03.916274       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 13:04:03.916337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 13:04:03.931332       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 13:04:03.931434       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 13:04:06.617565       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 13:08:05 addons-293977 kubelet[1235]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 13:08:05 addons-293977 kubelet[1235]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 13:08:05 addons-293977 kubelet[1235]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 13:08:05 addons-293977 kubelet[1235]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 13:08:06 addons-293977 kubelet[1235]: E0127 13:08:06.150860    1235 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983286150517288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:08:06 addons-293977 kubelet[1235]: E0127 13:08:06.150886    1235 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983286150517288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:08:10 addons-293977 kubelet[1235]: I0127 13:08:10.917921    1235 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 13:08:13 addons-293977 kubelet[1235]: I0127 13:08:13.918636    1235 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wrj9v" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 13:08:16 addons-293977 kubelet[1235]: E0127 13:08:16.153431    1235 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983296152913670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:08:16 addons-293977 kubelet[1235]: E0127 13:08:16.153526    1235 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983296152913670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:08:26 addons-293977 kubelet[1235]: E0127 13:08:26.156930    1235 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983306156340617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:08:26 addons-293977 kubelet[1235]: E0127 13:08:26.157397    1235 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983306156340617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.577584    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="8d12bed2-87a5-4810-a464-a3588a3d81da" containerName="liveness-probe"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.578028    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="01a3dbe9-a3e0-4566-9bcc-e5076013f3f2" containerName="task-pv-container"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.578089    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="448b5e4b-91fc-427e-a618-e00b5252243f" containerName="csi-attacher"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.578122    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="4ad4e1e8-c433-464a-85a0-cbb51679c3be" containerName="local-path-provisioner"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.578232    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="19ee5ad8-06a1-4624-9953-9efa38f15281" containerName="volume-snapshot-controller"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.578266    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="bdccaf65-aa95-40ab-83d4-062e5b3b5228" containerName="csi-resizer"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.578353    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="8d12bed2-87a5-4810-a464-a3588a3d81da" containerName="hostpath"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.578438    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="8d12bed2-87a5-4810-a464-a3588a3d81da" containerName="node-driver-registrar"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.578489    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="b1f58a82-5265-4fe6-956f-5d4b97b2ab1a" containerName="volume-snapshot-controller"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.578519    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="8d12bed2-87a5-4810-a464-a3588a3d81da" containerName="csi-snapshotter"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.578604    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="8d12bed2-87a5-4810-a464-a3588a3d81da" containerName="csi-provisioner"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.578634    1235 memory_manager.go:355] "RemoveStaleState removing state" podUID="8d12bed2-87a5-4810-a464-a3588a3d81da" containerName="csi-external-health-monitor-controller"
	Jan 27 13:08:31 addons-293977 kubelet[1235]: I0127 13:08:31.670453    1235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p2gp\" (UniqueName: \"kubernetes.io/projected/f3d3a02e-4ae7-4857-90cb-f19a06b736fe-kube-api-access-6p2gp\") pod \"hello-world-app-7d9564db4-2lwj5\" (UID: \"f3d3a02e-4ae7-4857-90cb-f19a06b736fe\") " pod="default/hello-world-app-7d9564db4-2lwj5"
	
	
	==> storage-provisioner [486d257962ff729d348d69eeb1bc32c0a4638d3a0932dbb2d5f8be53aae73936] <==
	I0127 13:04:18.308069       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 13:04:18.342487       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 13:04:18.342550       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 13:04:18.361296       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 13:04:18.361428       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-293977_9ab46ee7-00ec-48ec-9eca-df3aca9f4e2a!
	I0127 13:04:18.371775       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7b342e72-8644-4173-86ef-19fd7261f2bc", APIVersion:"v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-293977_9ab46ee7-00ec-48ec-9eca-df3aca9f4e2a became leader
	I0127 13:04:18.484439       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-293977_9ab46ee7-00ec-48ec-9eca-df3aca9f4e2a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-293977 -n addons-293977
helpers_test.go:261: (dbg) Run:  kubectl --context addons-293977 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-97cts ingress-nginx-admission-patch-6pthl
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-293977 describe pod ingress-nginx-admission-create-97cts ingress-nginx-admission-patch-6pthl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-293977 describe pod ingress-nginx-admission-create-97cts ingress-nginx-admission-patch-6pthl: exit status 1 (54.291684ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-97cts" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6pthl" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-293977 describe pod ingress-nginx-admission-create-97cts ingress-nginx-admission-patch-6pthl: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-293977 addons disable ingress-dns --alsologtostderr -v=1: (1.684817153s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-293977 addons disable ingress --alsologtostderr -v=1: (7.761100096s)
--- FAIL: TestAddons/parallel/Ingress (152.27s)

x
+
TestPreload (163.77s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-585145 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-585145 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m29.595962023s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-585145 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-585145 image pull gcr.io/k8s-minikube/busybox: (2.296217651s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-585145
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-585145: (7.284642641s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-585145 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0127 13:58:28.672938  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-585145 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.698553889s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-585145 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
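The image list above contains only the preloaded v1.24.4 images, so the busybox image pulled before the stop did not survive the stop/start cycle. As a minimal illustration only (this is not the actual preload_test.go assertion; it assumes minikube is on PATH and reuses the profile name from the log above), the final check amounts to listing the restarted profile's images and looking for the busybox reference:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Hypothetical sketch: list the images cached in the restarted profile.
		// The profile name is taken from the log above; adjust as needed.
		out, err := exec.Command("minikube", "-p", "test-preload-585145", "image", "list").CombinedOutput()
		if err != nil {
			fmt.Printf("image list failed: %v\n%s", err, out)
			return
		}
		// The test expects the image pulled before the stop to still be listed.
		if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("busybox survived the stop/start cycle")
		} else {
			fmt.Printf("busybox missing from image list:\n%s", out)
		}
	}

In this run the busybox reference is absent from the output, which is why the failure is reported at preload_test.go:76 below.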
panic.go:629: *** TestPreload FAILED at 2025-01-27 13:59:28.719516828 +0000 UTC m=+3377.628405454
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-585145 -n test-preload-585145
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-585145 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-585145 logs -n 25: (1.002259657s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-268241 ssh -n                                                                 | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:44 UTC | 27 Jan 25 13:44 UTC |
	|         | multinode-268241-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-268241 ssh -n multinode-268241 sudo cat                                       | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:44 UTC | 27 Jan 25 13:44 UTC |
	|         | /home/docker/cp-test_multinode-268241-m03_multinode-268241.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-268241 cp multinode-268241-m03:/home/docker/cp-test.txt                       | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:44 UTC | 27 Jan 25 13:44 UTC |
	|         | multinode-268241-m02:/home/docker/cp-test_multinode-268241-m03_multinode-268241-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-268241 ssh -n                                                                 | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:44 UTC | 27 Jan 25 13:44 UTC |
	|         | multinode-268241-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-268241 ssh -n multinode-268241-m02 sudo cat                                   | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:44 UTC | 27 Jan 25 13:44 UTC |
	|         | /home/docker/cp-test_multinode-268241-m03_multinode-268241-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-268241 node stop m03                                                          | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:44 UTC | 27 Jan 25 13:44 UTC |
	| node    | multinode-268241 node start                                                             | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:44 UTC | 27 Jan 25 13:45 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-268241                                                                | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:45 UTC |                     |
	| stop    | -p multinode-268241                                                                     | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:45 UTC | 27 Jan 25 13:48 UTC |
	| start   | -p multinode-268241                                                                     | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:48 UTC | 27 Jan 25 13:51 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-268241                                                                | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:51 UTC |                     |
	| node    | multinode-268241 node delete                                                            | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:51 UTC | 27 Jan 25 13:51 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-268241 stop                                                                   | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:51 UTC | 27 Jan 25 13:54 UTC |
	| start   | -p multinode-268241                                                                     | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:54 UTC | 27 Jan 25 13:56 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-268241                                                                | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:56 UTC |                     |
	| start   | -p multinode-268241-m02                                                                 | multinode-268241-m02 | jenkins | v1.35.0 | 27 Jan 25 13:56 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-268241-m03                                                                 | multinode-268241-m03 | jenkins | v1.35.0 | 27 Jan 25 13:56 UTC | 27 Jan 25 13:56 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-268241                                                                 | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:56 UTC |                     |
	| delete  | -p multinode-268241-m03                                                                 | multinode-268241-m03 | jenkins | v1.35.0 | 27 Jan 25 13:56 UTC | 27 Jan 25 13:56 UTC |
	| delete  | -p multinode-268241                                                                     | multinode-268241     | jenkins | v1.35.0 | 27 Jan 25 13:56 UTC | 27 Jan 25 13:56 UTC |
	| start   | -p test-preload-585145                                                                  | test-preload-585145  | jenkins | v1.35.0 | 27 Jan 25 13:56 UTC | 27 Jan 25 13:58 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-585145 image pull                                                          | test-preload-585145  | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC | 27 Jan 25 13:58 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-585145                                                                  | test-preload-585145  | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC | 27 Jan 25 13:58 UTC |
	| start   | -p test-preload-585145                                                                  | test-preload-585145  | jenkins | v1.35.0 | 27 Jan 25 13:58 UTC | 27 Jan 25 13:59 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-585145 image list                                                          | test-preload-585145  | jenkins | v1.35.0 | 27 Jan 25 13:59 UTC | 27 Jan 25 13:59 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:58:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:58:26.851294  592993 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:58:26.851413  592993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:58:26.851424  592993 out.go:358] Setting ErrFile to fd 2...
	I0127 13:58:26.851428  592993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:58:26.851575  592993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 13:58:26.852118  592993 out.go:352] Setting JSON to false
	I0127 13:58:26.853060  592993 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":16852,"bootTime":1737969455,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:58:26.853130  592993 start.go:139] virtualization: kvm guest
	I0127 13:58:26.854935  592993 out.go:177] * [test-preload-585145] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:58:26.856014  592993 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 13:58:26.856013  592993 notify.go:220] Checking for updates...
	I0127 13:58:26.857964  592993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:58:26.858960  592993 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 13:58:26.859908  592993 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 13:58:26.860981  592993 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:58:26.862485  592993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:58:26.863909  592993 config.go:182] Loaded profile config "test-preload-585145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 13:58:26.864280  592993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:58:26.864330  592993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:58:26.879160  592993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0127 13:58:26.879517  592993 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:58:26.880214  592993 main.go:141] libmachine: Using API Version  1
	I0127 13:58:26.880247  592993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:58:26.880589  592993 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:58:26.880797  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	I0127 13:58:26.882271  592993 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 13:58:26.883505  592993 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:58:26.883765  592993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:58:26.883799  592993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:58:26.897879  592993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0127 13:58:26.898286  592993 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:58:26.898731  592993 main.go:141] libmachine: Using API Version  1
	I0127 13:58:26.898752  592993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:58:26.899036  592993 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:58:26.899223  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	I0127 13:58:26.932621  592993 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:58:26.933572  592993 start.go:297] selected driver: kvm2
	I0127 13:58:26.933611  592993 start.go:901] validating driver "kvm2" against &{Name:test-preload-585145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-585145
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:58:26.933746  592993 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:58:26.934398  592993 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:58:26.934488  592993 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:58:26.948407  592993 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:58:26.948735  592993 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:58:26.948768  592993 cni.go:84] Creating CNI manager for ""
	I0127 13:58:26.948817  592993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:58:26.948862  592993 start.go:340] cluster config:
	{Name:test-preload-585145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-585145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:58:26.948950  592993 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:58:26.950312  592993 out.go:177] * Starting "test-preload-585145" primary control-plane node in "test-preload-585145" cluster
	I0127 13:58:26.951383  592993 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 13:58:26.975904  592993 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 13:58:26.975923  592993 cache.go:56] Caching tarball of preloaded images
	I0127 13:58:26.976046  592993 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 13:58:26.977310  592993 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0127 13:58:26.978322  592993 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 13:58:27.000046  592993 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 13:58:30.716042  592993 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 13:58:30.716131  592993 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 13:58:31.570128  592993 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
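	# Illustrative sketch: the preload fetched and verified above, reproduced by hand.
	# The URL, file name and checksum come from the download.go/preload.go lines; using
	# md5sum against the ?checksum= value is an assumption about how it is validated.
	PRELOAD=preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	curl -fLo "$PRELOAD" \
	  "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/$PRELOAD"
	echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  $PRELOAD" | md5sum -c -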
	I0127 13:58:31.570261  592993 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/config.json ...
	I0127 13:58:31.570481  592993 start.go:360] acquireMachinesLock for test-preload-585145: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:58:31.570544  592993 start.go:364] duration metric: took 41.118µs to acquireMachinesLock for "test-preload-585145"
	I0127 13:58:31.570559  592993 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:58:31.570566  592993 fix.go:54] fixHost starting: 
	I0127 13:58:31.570818  592993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:58:31.570855  592993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:58:31.586480  592993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34097
	I0127 13:58:31.586989  592993 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:58:31.587586  592993 main.go:141] libmachine: Using API Version  1
	I0127 13:58:31.587612  592993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:58:31.587964  592993 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:58:31.588152  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	I0127 13:58:31.588326  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetState
	I0127 13:58:31.590054  592993 fix.go:112] recreateIfNeeded on test-preload-585145: state=Stopped err=<nil>
	I0127 13:58:31.590072  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	W0127 13:58:31.590224  592993 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:58:31.591865  592993 out.go:177] * Restarting existing kvm2 VM for "test-preload-585145" ...
	I0127 13:58:31.592850  592993 main.go:141] libmachine: (test-preload-585145) Calling .Start
	I0127 13:58:31.592992  592993 main.go:141] libmachine: (test-preload-585145) starting domain...
	I0127 13:58:31.593010  592993 main.go:141] libmachine: (test-preload-585145) ensuring networks are active...
	I0127 13:58:31.593834  592993 main.go:141] libmachine: (test-preload-585145) Ensuring network default is active
	I0127 13:58:31.594173  592993 main.go:141] libmachine: (test-preload-585145) Ensuring network mk-test-preload-585145 is active
	I0127 13:58:31.594555  592993 main.go:141] libmachine: (test-preload-585145) getting domain XML...
	I0127 13:58:31.595265  592993 main.go:141] libmachine: (test-preload-585145) creating domain...
	I0127 13:58:31.913978  592993 main.go:141] libmachine: (test-preload-585145) waiting for IP...
	I0127 13:58:31.914842  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:31.915164  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:31.915240  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:31.915156  593044 retry.go:31] will retry after 263.35314ms: waiting for domain to come up
	I0127 13:58:32.180700  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:32.181111  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:32.181143  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:32.181077  593044 retry.go:31] will retry after 242.580343ms: waiting for domain to come up
	I0127 13:58:32.425658  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:32.426051  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:32.426109  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:32.426044  593044 retry.go:31] will retry after 407.680814ms: waiting for domain to come up
	I0127 13:58:32.835560  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:32.835938  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:32.835970  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:32.835900  593044 retry.go:31] will retry after 592.428545ms: waiting for domain to come up
	I0127 13:58:33.429536  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:33.429931  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:33.429968  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:33.429889  593044 retry.go:31] will retry after 684.753467ms: waiting for domain to come up
	I0127 13:58:34.116794  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:34.117241  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:34.117274  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:34.117207  593044 retry.go:31] will retry after 842.338122ms: waiting for domain to come up
	I0127 13:58:34.961150  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:34.961555  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:34.961635  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:34.961543  593044 retry.go:31] will retry after 1.007183491s: waiting for domain to come up
	I0127 13:58:35.970650  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:35.971068  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:35.971098  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:35.971031  593044 retry.go:31] will retry after 1.160143008s: waiting for domain to come up
	I0127 13:58:37.132502  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:37.132851  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:37.132928  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:37.132853  593044 retry.go:31] will retry after 1.449797567s: waiting for domain to come up
	I0127 13:58:38.584317  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:38.584771  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:38.584805  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:38.584719  593044 retry.go:31] will retry after 1.65279836s: waiting for domain to come up
	I0127 13:58:40.239531  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:40.240046  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:40.240078  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:40.240010  593044 retry.go:31] will retry after 2.568924883s: waiting for domain to come up
	I0127 13:58:42.810094  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:42.810509  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:42.810541  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:42.810469  593044 retry.go:31] will retry after 2.496743023s: waiting for domain to come up
	I0127 13:58:45.310072  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:45.310505  592993 main.go:141] libmachine: (test-preload-585145) DBG | unable to find current IP address of domain test-preload-585145 in network mk-test-preload-585145
	I0127 13:58:45.310534  592993 main.go:141] libmachine: (test-preload-585145) DBG | I0127 13:58:45.310462  593044 retry.go:31] will retry after 4.217124453s: waiting for domain to come up
	I0127 13:58:49.531164  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.531680  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has current primary IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.531701  592993 main.go:141] libmachine: (test-preload-585145) found domain IP: 192.168.39.201
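	# Rough shell equivalent of the wait-for-IP loop above: poll the DHCP leases of
	# libvirt network mk-test-preload-585145 for the domain's MAC until an address
	# appears (the real retry intervals grow from ~250ms to several seconds, as the
	# DBG lines show). MAC, network name and connection URI are taken from the log.
	MAC=52:54:00:93:bb:61
	until virsh -c qemu:///system net-dhcp-leases mk-test-preload-585145 | grep -q "$MAC"; do
	  sleep 1
	done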
	I0127 13:58:49.531714  592993 main.go:141] libmachine: (test-preload-585145) reserving static IP address...
	I0127 13:58:49.532127  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "test-preload-585145", mac: "52:54:00:93:bb:61", ip: "192.168.39.201"} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:49.532155  592993 main.go:141] libmachine: (test-preload-585145) DBG | skip adding static IP to network mk-test-preload-585145 - found existing host DHCP lease matching {name: "test-preload-585145", mac: "52:54:00:93:bb:61", ip: "192.168.39.201"}
	I0127 13:58:49.532170  592993 main.go:141] libmachine: (test-preload-585145) reserved static IP address 192.168.39.201 for domain test-preload-585145
	I0127 13:58:49.532187  592993 main.go:141] libmachine: (test-preload-585145) waiting for SSH...
	I0127 13:58:49.532203  592993 main.go:141] libmachine: (test-preload-585145) DBG | Getting to WaitForSSH function...
	I0127 13:58:49.534248  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.534570  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:49.534597  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.534743  592993 main.go:141] libmachine: (test-preload-585145) DBG | Using SSH client type: external
	I0127 13:58:49.534785  592993 main.go:141] libmachine: (test-preload-585145) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/test-preload-585145/id_rsa (-rw-------)
	I0127 13:58:49.534837  592993 main.go:141] libmachine: (test-preload-585145) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/test-preload-585145/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:58:49.534853  592993 main.go:141] libmachine: (test-preload-585145) DBG | About to run SSH command:
	I0127 13:58:49.534892  592993 main.go:141] libmachine: (test-preload-585145) DBG | exit 0
	I0127 13:58:49.661027  592993 main.go:141] libmachine: (test-preload-585145) DBG | SSH cmd err, output: <nil>: 
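	# The WaitForSSH probe above, written out as a plain ssh invocation (key path,
	# user and address come from the DBG lines; options trimmed to the essentials,
	# the full set is in the DBG argument dump; 'exit 0' is the probe command).
	ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	    -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/test-preload-585145/id_rsa \
	    -p 22 docker@192.168.39.201 'exit 0'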
	I0127 13:58:49.661331  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetConfigRaw
	I0127 13:58:49.661959  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetIP
	I0127 13:58:49.664224  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.664610  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:49.664643  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.664905  592993 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/config.json ...
	I0127 13:58:49.665090  592993 machine.go:93] provisionDockerMachine start ...
	I0127 13:58:49.665113  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	I0127 13:58:49.665337  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:58:49.667442  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.667736  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:49.667755  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.667891  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHPort
	I0127 13:58:49.668058  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:49.668203  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:49.668315  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHUsername
	I0127 13:58:49.668435  592993 main.go:141] libmachine: Using SSH client type: native
	I0127 13:58:49.668674  592993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0127 13:58:49.668686  592993 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:58:49.777340  592993 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:58:49.777378  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetMachineName
	I0127 13:58:49.777587  592993 buildroot.go:166] provisioning hostname "test-preload-585145"
	I0127 13:58:49.777617  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetMachineName
	I0127 13:58:49.777802  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:58:49.780177  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.780464  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:49.780491  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.780627  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHPort
	I0127 13:58:49.780801  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:49.780965  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:49.781092  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHUsername
	I0127 13:58:49.781269  592993 main.go:141] libmachine: Using SSH client type: native
	I0127 13:58:49.781445  592993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0127 13:58:49.781457  592993 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-585145 && echo "test-preload-585145" | sudo tee /etc/hostname
	I0127 13:58:49.901982  592993 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-585145
	
	I0127 13:58:49.902005  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:58:49.904289  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.904581  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:49.904609  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:49.904795  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHPort
	I0127 13:58:49.904954  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:49.905106  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:49.905244  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHUsername
	I0127 13:58:49.905404  592993 main.go:141] libmachine: Using SSH client type: native
	I0127 13:58:49.905558  592993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0127 13:58:49.905573  592993 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-585145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-585145/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-585145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:58:50.021391  592993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:58:50.021424  592993 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 13:58:50.021472  592993 buildroot.go:174] setting up certificates
	I0127 13:58:50.021485  592993 provision.go:84] configureAuth start
	I0127 13:58:50.021499  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetMachineName
	I0127 13:58:50.021713  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetIP
	I0127 13:58:50.023999  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.024317  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:50.024348  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.024475  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:58:50.026484  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.026853  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:50.026887  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.026969  592993 provision.go:143] copyHostCerts
	I0127 13:58:50.027023  592993 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 13:58:50.027041  592993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 13:58:50.027100  592993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 13:58:50.027196  592993 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 13:58:50.027205  592993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 13:58:50.027231  592993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 13:58:50.027293  592993 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 13:58:50.027300  592993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 13:58:50.027321  592993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 13:58:50.027369  592993 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.test-preload-585145 san=[127.0.0.1 192.168.39.201 localhost minikube test-preload-585145]
	I0127 13:58:50.323606  592993 provision.go:177] copyRemoteCerts
	I0127 13:58:50.323653  592993 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:58:50.323673  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:58:50.326066  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.326357  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:50.326377  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.326546  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHPort
	I0127 13:58:50.326737  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:50.326889  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHUsername
	I0127 13:58:50.327020  592993 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/test-preload-585145/id_rsa Username:docker}
	I0127 13:58:50.411030  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 13:58:50.437455  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:58:50.463349  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
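	# The server cert generated above (org=jenkins.test-preload-585145) is what just
	# landed in /etc/docker/server.pem; its SANs (127.0.0.1, 192.168.39.201, localhost,
	# minikube, test-preload-585145) could be inspected on the VM with, for example
	# (illustrative; assumes openssl is present in the guest):
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'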
	I0127 13:58:50.489078  592993 provision.go:87] duration metric: took 467.584179ms to configureAuth
	I0127 13:58:50.489106  592993 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:58:50.489240  592993 config.go:182] Loaded profile config "test-preload-585145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 13:58:50.489307  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:58:50.491501  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.491788  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:50.491817  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.491985  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHPort
	I0127 13:58:50.492157  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:50.492300  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:50.492417  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHUsername
	I0127 13:58:50.492533  592993 main.go:141] libmachine: Using SSH client type: native
	I0127 13:58:50.492677  592993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0127 13:58:50.492694  592993 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:58:50.727393  592993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:58:50.727424  592993 machine.go:96] duration metric: took 1.062317452s to provisionDockerMachine
	I0127 13:58:50.727437  592993 start.go:293] postStartSetup for "test-preload-585145" (driver="kvm2")
	I0127 13:58:50.727452  592993 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:58:50.727481  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	I0127 13:58:50.727767  592993 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:58:50.727808  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:58:50.730333  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.730670  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:50.730698  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.730856  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHPort
	I0127 13:58:50.731053  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:50.731222  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHUsername
	I0127 13:58:50.731353  592993 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/test-preload-585145/id_rsa Username:docker}
	I0127 13:58:50.815199  592993 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:58:50.819404  592993 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:58:50.819422  592993 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 13:58:50.819486  592993 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 13:58:50.819583  592993 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 13:58:50.819709  592993 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:58:50.828669  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 13:58:50.851489  592993 start.go:296] duration metric: took 124.040638ms for postStartSetup
	I0127 13:58:50.851524  592993 fix.go:56] duration metric: took 19.280958585s for fixHost
	I0127 13:58:50.851542  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:58:50.853768  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.854047  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:50.854081  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.854196  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHPort
	I0127 13:58:50.854402  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:50.854569  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:50.854697  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHUsername
	I0127 13:58:50.854844  592993 main.go:141] libmachine: Using SSH client type: native
	I0127 13:58:50.855028  592993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0127 13:58:50.855044  592993 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:58:50.961738  592993 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986330.937372715
	
	I0127 13:58:50.961754  592993 fix.go:216] guest clock: 1737986330.937372715
	I0127 13:58:50.961760  592993 fix.go:229] Guest: 2025-01-27 13:58:50.937372715 +0000 UTC Remote: 2025-01-27 13:58:50.851528559 +0000 UTC m=+24.038308852 (delta=85.844156ms)
	I0127 13:58:50.961802  592993 fix.go:200] guest clock delta is within tolerance: 85.844156ms
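	# The delta above is simply guest wall clock minus host wall clock, both expressed
	# as seconds since the epoch (values taken from the two fix.go lines above):
	python3 -c 'print(1737986330.937372715 - 1737986330.851528559)'   # ~0.085844 s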
	I0127 13:58:50.961809  592993 start.go:83] releasing machines lock for "test-preload-585145", held for 19.391254293s
	I0127 13:58:50.961832  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	I0127 13:58:50.962041  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetIP
	I0127 13:58:50.964201  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.964558  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:50.964591  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.964688  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	I0127 13:58:50.965087  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	I0127 13:58:50.965267  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	I0127 13:58:50.965381  592993 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:58:50.965427  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:58:50.965456  592993 ssh_runner.go:195] Run: cat /version.json
	I0127 13:58:50.965479  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:58:50.967909  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.968283  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:50.968311  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.968328  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.968426  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHPort
	I0127 13:58:50.968588  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:50.968729  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHUsername
	I0127 13:58:50.968738  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:50.968773  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:50.968861  592993 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/test-preload-585145/id_rsa Username:docker}
	I0127 13:58:50.968909  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHPort
	I0127 13:58:50.969094  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:58:50.969244  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHUsername
	I0127 13:58:50.969358  592993 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/test-preload-585145/id_rsa Username:docker}
	I0127 13:58:51.063870  592993 ssh_runner.go:195] Run: systemctl --version
	I0127 13:58:51.069585  592993 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:58:51.209467  592993 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:58:51.215613  592993 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:58:51.215668  592993 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:58:51.230854  592993 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:58:51.230871  592993 start.go:495] detecting cgroup driver to use...
	I0127 13:58:51.230917  592993 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:58:51.247616  592993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:58:51.261458  592993 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:58:51.261488  592993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:58:51.274484  592993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:58:51.287190  592993 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:58:51.408361  592993 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:58:51.537690  592993 docker.go:233] disabling docker service ...
	I0127 13:58:51.537742  592993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:58:51.551312  592993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:58:51.563798  592993 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:58:51.690710  592993 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:58:51.817742  592993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:58:51.830083  592993 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:58:51.847337  592993 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0127 13:58:51.847401  592993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:58:51.857678  592993 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:58:51.857742  592993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:58:51.867802  592993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:58:51.882586  592993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:58:51.896669  592993 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:58:51.907079  592993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:58:51.916835  592993 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:58:51.932820  592993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
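Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands shown, not a captured copy of the file):

	pause_image = "registry.k8s.io/pause:3.7"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]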
	I0127 13:58:51.942798  592993 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:58:51.951701  592993 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:58:51.951751  592993 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:58:51.964677  592993 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
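The status-255 sysctl above only means the bridge netfilter knob does not exist until the br_netfilter module is loaded, which is why modprobe follows it; the equivalent manual sequence on such a guest would be:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables     # resolves once the module is loaded
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'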
	I0127 13:58:51.973766  592993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:58:52.103623  592993 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:58:52.192586  592993 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:58:52.192664  592993 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:58:52.197398  592993 start.go:563] Will wait 60s for crictl version
	I0127 13:58:52.197444  592993 ssh_runner.go:195] Run: which crictl
	I0127 13:58:52.200974  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:58:52.237521  592993 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:58:52.237607  592993 ssh_runner.go:195] Run: crio --version
	I0127 13:58:52.262838  592993 ssh_runner.go:195] Run: crio --version
	I0127 13:58:52.288916  592993 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0127 13:58:52.290016  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetIP
	I0127 13:58:52.292855  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:52.293185  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:58:52.293207  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:58:52.293418  592993 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 13:58:52.297320  592993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
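The one-liner above rewrites /etc/hosts idempotently: it drops any existing host.minikube.internal entry, appends the fresh 192.168.39.1 mapping, and copies the temp file back with sudo. Inside the guest the name should then resolve:

	getent hosts host.minikube.internal    # expect 192.168.39.1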
	I0127 13:58:52.309391  592993 kubeadm.go:883] updating cluster {Name:test-preload-585145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-585145 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:58:52.309488  592993 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 13:58:52.309525  592993 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:58:52.350751  592993 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 13:58:52.350799  592993 ssh_runner.go:195] Run: which lz4
	I0127 13:58:52.354344  592993 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:58:52.358165  592993 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:58:52.358197  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0127 13:58:53.869843  592993 crio.go:462] duration metric: took 1.515518981s to copy over tarball
	I0127 13:58:53.869931  592993 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:58:56.253552  592993 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.383577933s)
	I0127 13:58:56.253604  592993 crio.go:469] duration metric: took 2.383723724s to extract the tarball
	I0127 13:58:56.253615  592993 ssh_runner.go:146] rm: /preloaded.tar.lz4
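Because no preloaded images were found, the ~459 MB preload tarball is copied into the guest, unpacked over /var (where CRI-O keeps its image store), and then removed; the extraction step alone can be reproduced by hand with the same flags:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4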
	I0127 13:58:56.294594  592993 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:58:56.340454  592993 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 13:58:56.340481  592993 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 13:58:56.340562  592993 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:58:56.340562  592993 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:58:56.340562  592993 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:58:56.340599  592993 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:58:56.340633  592993 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 13:58:56.340636  592993 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:58:56.340663  592993 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:58:56.340681  592993 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 13:58:56.342154  592993 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:58:56.342233  592993 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 13:58:56.342446  592993 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:58:56.342463  592993 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:58:56.342463  592993 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:58:56.342467  592993 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:58:56.342446  592993 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:58:56.342475  592993 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 13:58:56.495943  592993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0127 13:58:56.507347  592993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:58:56.510893  592993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:58:56.513417  592993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0127 13:58:56.514293  592993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:58:56.516256  592993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:58:56.537398  592993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:58:56.558573  592993 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0127 13:58:56.558634  592993 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0127 13:58:56.558679  592993 ssh_runner.go:195] Run: which crictl
	I0127 13:58:56.655820  592993 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0127 13:58:56.655877  592993 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:58:56.655910  592993 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0127 13:58:56.655935  592993 ssh_runner.go:195] Run: which crictl
	I0127 13:58:56.655954  592993 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:58:56.655965  592993 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0127 13:58:56.655987  592993 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0127 13:58:56.655999  592993 ssh_runner.go:195] Run: which crictl
	I0127 13:58:56.656020  592993 ssh_runner.go:195] Run: which crictl
	I0127 13:58:56.656088  592993 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0127 13:58:56.656120  592993 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:58:56.656156  592993 ssh_runner.go:195] Run: which crictl
	I0127 13:58:56.660335  592993 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0127 13:58:56.660365  592993 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:58:56.660395  592993 ssh_runner.go:195] Run: which crictl
	I0127 13:58:56.679862  592993 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0127 13:58:56.679894  592993 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:58:56.679926  592993 ssh_runner.go:195] Run: which crictl
	I0127 13:58:56.679928  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 13:58:56.679982  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:58:56.680047  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 13:58:56.680075  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:58:56.680134  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:58:56.680134  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:58:56.683952  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:58:56.822246  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 13:58:56.822246  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:58:56.822255  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 13:58:56.830731  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:58:56.831641  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:58:56.831750  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:58:56.831781  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:58:56.957742  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 13:58:56.957857  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:58:56.957886  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 13:58:56.972947  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:58:56.986587  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:58:56.986638  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:58:56.986693  592993 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:58:57.047852  592993 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0127 13:58:57.047995  592993 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0127 13:58:57.078804  592993 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0127 13:58:57.078868  592993 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0127 13:58:57.078911  592993 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0127 13:58:57.078962  592993 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 13:58:57.101283  592993 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 13:58:57.101389  592993 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0127 13:58:57.125077  592993 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0127 13:58:57.125190  592993 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 13:58:57.125510  592993 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0127 13:58:57.125615  592993 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 13:58:57.129528  592993 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0127 13:58:57.129542  592993 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0127 13:58:57.129550  592993 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0127 13:58:57.129559  592993 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0127 13:58:57.129573  592993 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0127 13:58:57.129588  592993 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0127 13:58:57.129588  592993 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0127 13:58:57.129627  592993 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0127 13:58:57.129638  592993 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 13:58:57.131289  592993 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0127 13:58:57.133740  592993 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0127 13:58:57.262645  592993 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:58:59.881317  592993 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.61863116s)
	I0127 13:58:59.881335  592993 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.751673262s)
	I0127 13:58:59.881376  592993 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0127 13:58:59.881404  592993 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0127 13:58:59.881466  592993 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0127 13:59:01.939435  592993 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.057940983s)
	I0127 13:59:01.939464  592993 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0127 13:59:01.939492  592993 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 13:59:01.939530  592993 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 13:59:02.681289  592993 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0127 13:59:02.681353  592993 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0127 13:59:02.681408  592993 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0127 13:59:03.020196  592993 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0127 13:59:03.020249  592993 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 13:59:03.020313  592993 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 13:59:03.767634  592993 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0127 13:59:03.767686  592993 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 13:59:03.767738  592993 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 13:59:04.606030  592993 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0127 13:59:04.606087  592993 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 13:59:04.606162  592993 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 13:59:05.050041  592993 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0127 13:59:05.050113  592993 cache_images.go:123] Successfully loaded all cached images
	I0127 13:59:05.050121  592993 cache_images.go:92] duration metric: took 8.709627145s to LoadCachedImages
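For every image the runtime still lacks, the loop above removes any stale copy from the CRI store and loads the cached tarball through podman (which shares the containers/storage image store with CRI-O here); one iteration, with names taken straight from the log, looks like:

	sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	sudo podman load -i /var/lib/minikube/images/pause_3.7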
	I0127 13:59:05.050141  592993 kubeadm.go:934] updating node { 192.168.39.201 8443 v1.24.4 crio true true} ...
	I0127 13:59:05.050288  592993 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-585145 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-585145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
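The [Service] override above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; on the guest the merged unit can be inspected with standard systemd tooling:

	systemctl cat kubelet
	systemctl show kubelet -p ExecStart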
	I0127 13:59:05.050398  592993 ssh_runner.go:195] Run: crio config
	I0127 13:59:05.096014  592993 cni.go:84] Creating CNI manager for ""
	I0127 13:59:05.096033  592993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:59:05.096043  592993 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:59:05.096061  592993 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-585145 NodeName:test-preload-585145 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:59:05.096211  592993 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-585145"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:59:05.096289  592993 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0127 13:59:05.106476  592993 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:59:05.106543  592993 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:59:05.115844  592993 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0127 13:59:05.131618  592993 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:59:05.147032  592993 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
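The rendered config is staged as kubeadm.yaml.new (the 2106-byte scp above), diffed against any existing kubeadm.yaml further down, and only then promoted. A config like the one dumped above can also be exercised without touching the node, e.g. (generic kubeadm usage, not a command taken from this log):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run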
	I0127 13:59:05.163204  592993 ssh_runner.go:195] Run: grep 192.168.39.201	control-plane.minikube.internal$ /etc/hosts
	I0127 13:59:05.166758  592993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:59:05.178593  592993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:59:05.288867  592993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:59:05.304907  592993 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145 for IP: 192.168.39.201
	I0127 13:59:05.304928  592993 certs.go:194] generating shared ca certs ...
	I0127 13:59:05.304949  592993 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:05.305157  592993 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 13:59:05.305222  592993 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 13:59:05.305236  592993 certs.go:256] generating profile certs ...
	I0127 13:59:05.305351  592993 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/client.key
	I0127 13:59:05.305437  592993 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/apiserver.key.e1160350
	I0127 13:59:05.305494  592993 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/proxy-client.key
	I0127 13:59:05.305665  592993 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 13:59:05.305707  592993 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 13:59:05.305720  592993 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 13:59:05.305752  592993 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:59:05.305785  592993 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:59:05.305822  592993 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 13:59:05.305891  592993 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 13:59:05.306869  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:59:05.337370  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:59:05.362145  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:59:05.397234  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:59:05.430835  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 13:59:05.460747  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:59:05.508330  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:59:05.532380  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:59:05.555713  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:59:05.578312  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 13:59:05.600982  592993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 13:59:05.623892  592993 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:59:05.639606  592993 ssh_runner.go:195] Run: openssl version
	I0127 13:59:05.645064  592993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:59:05.655082  592993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:59:05.659392  592993 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:59:05.659425  592993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:59:05.664814  592993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:59:05.674761  592993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 13:59:05.684499  592993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 13:59:05.688759  592993 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 13:59:05.688809  592993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 13:59:05.694263  592993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 13:59:05.704075  592993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 13:59:05.714114  592993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 13:59:05.718436  592993 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 13:59:05.718489  592993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 13:59:05.723818  592993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
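The pattern for each CA above is: install the PEM under /usr/share/ca-certificates, then symlink /etc/ssl/certs/<subject-hash>.0 to it so OpenSSL-based clients can find it. The hash in the link name is exactly what the openssl x509 -hash call prints, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941 here, matching the /etc/ssl/certs/b5213941.0 link above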
	I0127 13:59:05.733661  592993 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:59:05.737900  592993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:59:05.743460  592993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:59:05.748919  592993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:59:05.754511  592993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:59:05.760042  592993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:59:05.765517  592993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
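Each of the -checkend 86400 runs above exits 0 only if the certificate is still valid 24 hours from now, so a non-zero exit would push minikube to regenerate that cert; the same check for a single cert, by hand:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"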
	I0127 13:59:05.770939  592993 kubeadm.go:392] StartCluster: {Name:test-preload-585145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-585145 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:59:05.771060  592993 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:59:05.771101  592993 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:59:05.810000  592993 cri.go:89] found id: ""
	I0127 13:59:05.810042  592993 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:59:05.819359  592993 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:59:05.819375  592993 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:59:05.819410  592993 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:59:05.828587  592993 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:59:05.829215  592993 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-585145" does not appear in /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 13:59:05.829332  592993 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-555419/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-585145" cluster setting kubeconfig missing "test-preload-585145" context setting]
	I0127 13:59:05.829631  592993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:05.830304  592993 kapi.go:59] client config for test-preload-585145: &rest.Config{Host:"https://192.168.39.201:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/client.crt", KeyFile:"/home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/client.key", CAFile:"/home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 13:59:05.831010  592993 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:59:05.839835  592993 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.201
	I0127 13:59:05.839866  592993 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:59:05.839876  592993 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 13:59:05.839909  592993 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:59:05.873889  592993 cri.go:89] found id: ""
	I0127 13:59:05.873932  592993 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:59:05.888494  592993 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:59:05.897462  592993 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:59:05.897483  592993 kubeadm.go:157] found existing configuration files:
	
	I0127 13:59:05.897514  592993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:59:05.906035  592993 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:59:05.906074  592993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:59:05.914819  592993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:59:05.923369  592993 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:59:05.923401  592993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:59:05.932117  592993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:59:05.940524  592993 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:59:05.940556  592993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:59:05.949262  592993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:59:05.957606  592993 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:59:05.957653  592993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:59:05.966353  592993 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:59:05.975231  592993 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:59:06.059203  592993 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:59:06.825671  592993 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:59:07.067339  592993 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:59:07.142382  592993 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
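Instead of a full kubeadm init, the restart path replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. The control-plane and etcd phases write static pod manifests that the kubelet then launches, which is why the very next step waits for a kube-apiserver process; the generated manifests live in the kubeadm default location:

	sudo ls /etc/kubernetes/manifests
	# typically etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml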
	I0127 13:59:07.256629  592993 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:59:07.256712  592993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:59:07.757664  592993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:59:08.257342  592993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:59:08.275039  592993 api_server.go:72] duration metric: took 1.018407457s to wait for apiserver process to appear ...
	I0127 13:59:08.275079  592993 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:59:08.275105  592993 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I0127 13:59:08.275649  592993 api_server.go:269] stopped: https://192.168.39.201:8443/healthz: Get "https://192.168.39.201:8443/healthz": dial tcp 192.168.39.201:8443: connect: connection refused
	I0127 13:59:08.775317  592993 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I0127 13:59:11.740828  592993 api_server.go:279] https://192.168.39.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:59:11.740861  592993 api_server.go:103] status: https://192.168.39.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:59:11.740880  592993 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I0127 13:59:11.787997  592993 api_server.go:279] https://192.168.39.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:59:11.788031  592993 api_server.go:103] status: https://192.168.39.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:59:11.788054  592993 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I0127 13:59:11.811034  592993 api_server.go:279] https://192.168.39.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:59:11.811076  592993 api_server.go:103] status: https://192.168.39.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
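The sequence so far (connection refused, then 403 for system:anonymous, then verbose 500s while post-start hooks finish) is the normal restart pattern for this health wait; polling continues until /healthz returns 200. The same endpoint can be probed by hand, skipping certificate verification since only the status code matters here:

	curl -k https://192.168.39.201:8443/healthz
	curl -k 'https://192.168.39.201:8443/healthz?verbose'   # per-check breakdown like the blocks above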
	I0127 13:59:12.275678  592993 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I0127 13:59:12.281021  592993 api_server.go:279] https://192.168.39.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:59:12.281052  592993 api_server.go:103] status: https://192.168.39.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:59:12.775720  592993 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I0127 13:59:12.789014  592993 api_server.go:279] https://192.168.39.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:59:12.789048  592993 api_server.go:103] status: https://192.168.39.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:59:13.275675  592993 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I0127 13:59:13.280710  592993 api_server.go:279] https://192.168.39.201:8443/healthz returned 200:
	ok
	I0127 13:59:13.286978  592993 api_server.go:141] control plane version: v1.24.4
	I0127 13:59:13.287009  592993 api_server.go:131] duration metric: took 5.01192169s to wait for apiserver health ...
	I0127 13:59:13.287022  592993 cni.go:84] Creating CNI manager for ""
	I0127 13:59:13.287031  592993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:59:13.288577  592993 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:59:13.289764  592993 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:59:13.306154  592993 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:59:13.324560  592993 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:59:13.324651  592993 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 13:59:13.324673  592993 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 13:59:13.337831  592993 system_pods.go:59] 7 kube-system pods found
	I0127 13:59:13.337866  592993 system_pods.go:61] "coredns-6d4b75cb6d-g886z" [85c7b552-3cd2-4c11-ad8e-899054f17522] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:59:13.337875  592993 system_pods.go:61] "etcd-test-preload-585145" [e8e2a198-9ba7-4957-8997-1897bd9b7518] Running
	I0127 13:59:13.337893  592993 system_pods.go:61] "kube-apiserver-test-preload-585145" [2b61e68d-7bc2-4c87-b524-741254ded875] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:59:13.337910  592993 system_pods.go:61] "kube-controller-manager-test-preload-585145" [2e18b28f-4565-4156-bb2b-1a784f95112d] Running
	I0127 13:59:13.337924  592993 system_pods.go:61] "kube-proxy-prcb4" [99dacbcb-c4cf-4986-aa67-8881895df306] Running
	I0127 13:59:13.337935  592993 system_pods.go:61] "kube-scheduler-test-preload-585145" [105eb0a5-4ae2-4690-9c8a-604c9b3b877c] Running
	I0127 13:59:13.337949  592993 system_pods.go:61] "storage-provisioner" [47f79f66-a24a-47d3-8f2b-2790f1d92ab4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 13:59:13.337960  592993 system_pods.go:74] duration metric: took 13.379531ms to wait for pod list to return data ...
	I0127 13:59:13.337972  592993 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:59:13.341385  592993 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:59:13.341417  592993 node_conditions.go:123] node cpu capacity is 2
	I0127 13:59:13.341430  592993 node_conditions.go:105] duration metric: took 3.448655ms to run NodePressure ...
	I0127 13:59:13.341465  592993 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:59:13.552573  592993 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 13:59:13.556532  592993 kubeadm.go:739] kubelet initialised
	I0127 13:59:13.556548  592993 kubeadm.go:740] duration metric: took 3.950806ms waiting for restarted kubelet to initialise ...
	I0127 13:59:13.556557  592993 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:59:13.562612  592993 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-g886z" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:13.569906  592993 pod_ready.go:98] node "test-preload-585145" hosting pod "coredns-6d4b75cb6d-g886z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:13.569934  592993 pod_ready.go:82] duration metric: took 7.295699ms for pod "coredns-6d4b75cb6d-g886z" in "kube-system" namespace to be "Ready" ...
	E0127 13:59:13.569949  592993 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-585145" hosting pod "coredns-6d4b75cb6d-g886z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:13.569961  592993 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:13.575627  592993 pod_ready.go:98] node "test-preload-585145" hosting pod "etcd-test-preload-585145" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:13.575648  592993 pod_ready.go:82] duration metric: took 5.677127ms for pod "etcd-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	E0127 13:59:13.575658  592993 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-585145" hosting pod "etcd-test-preload-585145" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:13.575667  592993 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:13.585226  592993 pod_ready.go:98] node "test-preload-585145" hosting pod "kube-apiserver-test-preload-585145" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:13.585252  592993 pod_ready.go:82] duration metric: took 9.571728ms for pod "kube-apiserver-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	E0127 13:59:13.585263  592993 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-585145" hosting pod "kube-apiserver-test-preload-585145" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:13.585282  592993 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:13.728215  592993 pod_ready.go:98] node "test-preload-585145" hosting pod "kube-controller-manager-test-preload-585145" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:13.728249  592993 pod_ready.go:82] duration metric: took 142.945462ms for pod "kube-controller-manager-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	E0127 13:59:13.728258  592993 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-585145" hosting pod "kube-controller-manager-test-preload-585145" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:13.728264  592993 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-prcb4" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:14.127283  592993 pod_ready.go:98] node "test-preload-585145" hosting pod "kube-proxy-prcb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:14.127309  592993 pod_ready.go:82] duration metric: took 399.033075ms for pod "kube-proxy-prcb4" in "kube-system" namespace to be "Ready" ...
	E0127 13:59:14.127318  592993 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-585145" hosting pod "kube-proxy-prcb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:14.127324  592993 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:14.528032  592993 pod_ready.go:98] node "test-preload-585145" hosting pod "kube-scheduler-test-preload-585145" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:14.528059  592993 pod_ready.go:82] duration metric: took 400.728361ms for pod "kube-scheduler-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	E0127 13:59:14.528068  592993 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-585145" hosting pod "kube-scheduler-test-preload-585145" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:14.528076  592993 pod_ready.go:39] duration metric: took 971.507812ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:59:14.528109  592993 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:59:14.539992  592993 ops.go:34] apiserver oom_adj: -16
	I0127 13:59:14.540016  592993 kubeadm.go:597] duration metric: took 8.720632956s to restartPrimaryControlPlane
	I0127 13:59:14.540026  592993 kubeadm.go:394] duration metric: took 8.769092919s to StartCluster
	I0127 13:59:14.540064  592993 settings.go:142] acquiring lock: {Name:mk3584d1c70a231ddef63c926d3bba51690f47f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:14.540138  592993 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 13:59:14.540778  592993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:59:14.541004  592993 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:59:14.541050  592993 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:59:14.541154  592993 addons.go:69] Setting storage-provisioner=true in profile "test-preload-585145"
	I0127 13:59:14.541167  592993 addons.go:69] Setting default-storageclass=true in profile "test-preload-585145"
	I0127 13:59:14.541177  592993 addons.go:238] Setting addon storage-provisioner=true in "test-preload-585145"
	W0127 13:59:14.541189  592993 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:59:14.541192  592993 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-585145"
	I0127 13:59:14.541238  592993 host.go:66] Checking if "test-preload-585145" exists ...
	I0127 13:59:14.541240  592993 config.go:182] Loaded profile config "test-preload-585145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 13:59:14.541555  592993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:59:14.541613  592993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:59:14.541682  592993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:59:14.541734  592993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:59:14.542595  592993 out.go:177] * Verifying Kubernetes components...
	I0127 13:59:14.543928  592993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:59:14.556997  592993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0127 13:59:14.557043  592993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43259
	I0127 13:59:14.557443  592993 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:59:14.557497  592993 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:59:14.557968  592993 main.go:141] libmachine: Using API Version  1
	I0127 13:59:14.557985  592993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:59:14.558113  592993 main.go:141] libmachine: Using API Version  1
	I0127 13:59:14.558133  592993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:59:14.558346  592993 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:59:14.558593  592993 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:59:14.558779  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetState
	I0127 13:59:14.558907  592993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:59:14.558950  592993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:59:14.561098  592993 kapi.go:59] client config for test-preload-585145: &rest.Config{Host:"https://192.168.39.201:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/client.crt", KeyFile:"/home/jenkins/minikube-integration/20327-555419/.minikube/profiles/test-preload-585145/client.key", CAFile:"/home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 13:59:14.561507  592993 addons.go:238] Setting addon default-storageclass=true in "test-preload-585145"
	W0127 13:59:14.561531  592993 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:59:14.561561  592993 host.go:66] Checking if "test-preload-585145" exists ...
	I0127 13:59:14.561969  592993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:59:14.562019  592993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:59:14.574648  592993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0127 13:59:14.575024  592993 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:59:14.575490  592993 main.go:141] libmachine: Using API Version  1
	I0127 13:59:14.575509  592993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:59:14.575799  592993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0127 13:59:14.575869  592993 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:59:14.576096  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetState
	I0127 13:59:14.576222  592993 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:59:14.576772  592993 main.go:141] libmachine: Using API Version  1
	I0127 13:59:14.576797  592993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:59:14.577132  592993 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:59:14.577710  592993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:59:14.577749  592993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:59:14.577822  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	I0127 13:59:14.579394  592993 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:59:14.580616  592993 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:59:14.580634  592993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:59:14.580649  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:59:14.583583  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:59:14.584016  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:59:14.584045  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:59:14.584193  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHPort
	I0127 13:59:14.584364  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:59:14.584502  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHUsername
	I0127 13:59:14.584632  592993 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/test-preload-585145/id_rsa Username:docker}
	I0127 13:59:14.612078  592993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0127 13:59:14.612460  592993 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:59:14.612966  592993 main.go:141] libmachine: Using API Version  1
	I0127 13:59:14.612995  592993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:59:14.613318  592993 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:59:14.613549  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetState
	I0127 13:59:14.614949  592993 main.go:141] libmachine: (test-preload-585145) Calling .DriverName
	I0127 13:59:14.615164  592993 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:59:14.615183  592993 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:59:14.615201  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHHostname
	I0127 13:59:14.617929  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:59:14.618315  592993 main.go:141] libmachine: (test-preload-585145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:bb:61", ip: ""} in network mk-test-preload-585145: {Iface:virbr1 ExpiryTime:2025-01-27 14:58:42 +0000 UTC Type:0 Mac:52:54:00:93:bb:61 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:test-preload-585145 Clientid:01:52:54:00:93:bb:61}
	I0127 13:59:14.618349  592993 main.go:141] libmachine: (test-preload-585145) DBG | domain test-preload-585145 has defined IP address 192.168.39.201 and MAC address 52:54:00:93:bb:61 in network mk-test-preload-585145
	I0127 13:59:14.618504  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHPort
	I0127 13:59:14.618665  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHKeyPath
	I0127 13:59:14.618815  592993 main.go:141] libmachine: (test-preload-585145) Calling .GetSSHUsername
	I0127 13:59:14.618980  592993 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/test-preload-585145/id_rsa Username:docker}
	I0127 13:59:14.699493  592993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:59:14.716115  592993 node_ready.go:35] waiting up to 6m0s for node "test-preload-585145" to be "Ready" ...
	I0127 13:59:14.813954  592993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:59:14.838386  592993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:59:15.800105  592993 main.go:141] libmachine: Making call to close driver server
	I0127 13:59:15.800142  592993 main.go:141] libmachine: (test-preload-585145) Calling .Close
	I0127 13:59:15.800187  592993 main.go:141] libmachine: Making call to close driver server
	I0127 13:59:15.800216  592993 main.go:141] libmachine: (test-preload-585145) Calling .Close
	I0127 13:59:15.800456  592993 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:59:15.800476  592993 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:59:15.800487  592993 main.go:141] libmachine: Making call to close driver server
	I0127 13:59:15.800511  592993 main.go:141] libmachine: (test-preload-585145) Calling .Close
	I0127 13:59:15.800543  592993 main.go:141] libmachine: (test-preload-585145) DBG | Closing plugin on server side
	I0127 13:59:15.800488  592993 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:59:15.800572  592993 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:59:15.800580  592993 main.go:141] libmachine: Making call to close driver server
	I0127 13:59:15.800588  592993 main.go:141] libmachine: (test-preload-585145) Calling .Close
	I0127 13:59:15.800763  592993 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:59:15.800778  592993 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:59:15.800862  592993 main.go:141] libmachine: (test-preload-585145) DBG | Closing plugin on server side
	I0127 13:59:15.800871  592993 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:59:15.800884  592993 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:59:15.806016  592993 main.go:141] libmachine: Making call to close driver server
	I0127 13:59:15.806031  592993 main.go:141] libmachine: (test-preload-585145) Calling .Close
	I0127 13:59:15.806223  592993 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:59:15.806234  592993 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:59:15.807774  592993 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 13:59:15.808917  592993 addons.go:514] duration metric: took 1.267883742s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 13:59:16.718925  592993 node_ready.go:53] node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:18.719615  592993 node_ready.go:53] node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:20.720117  592993 node_ready.go:53] node "test-preload-585145" has status "Ready":"False"
	I0127 13:59:22.220476  592993 node_ready.go:49] node "test-preload-585145" has status "Ready":"True"
	I0127 13:59:22.220503  592993 node_ready.go:38] duration metric: took 7.504358248s for node "test-preload-585145" to be "Ready" ...
	I0127 13:59:22.220517  592993 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:59:22.225643  592993 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-g886z" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:22.229600  592993 pod_ready.go:93] pod "coredns-6d4b75cb6d-g886z" in "kube-system" namespace has status "Ready":"True"
	I0127 13:59:22.229619  592993 pod_ready.go:82] duration metric: took 3.952285ms for pod "coredns-6d4b75cb6d-g886z" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:22.229630  592993 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:22.233359  592993 pod_ready.go:93] pod "etcd-test-preload-585145" in "kube-system" namespace has status "Ready":"True"
	I0127 13:59:22.233377  592993 pod_ready.go:82] duration metric: took 3.740195ms for pod "etcd-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:22.233384  592993 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:24.239265  592993 pod_ready.go:103] pod "kube-apiserver-test-preload-585145" in "kube-system" namespace has status "Ready":"False"
	I0127 13:59:26.240283  592993 pod_ready.go:103] pod "kube-apiserver-test-preload-585145" in "kube-system" namespace has status "Ready":"False"
	I0127 13:59:27.738833  592993 pod_ready.go:93] pod "kube-apiserver-test-preload-585145" in "kube-system" namespace has status "Ready":"True"
	I0127 13:59:27.738861  592993 pod_ready.go:82] duration metric: took 5.505470623s for pod "kube-apiserver-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:27.738872  592993 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:27.742390  592993 pod_ready.go:93] pod "kube-controller-manager-test-preload-585145" in "kube-system" namespace has status "Ready":"True"
	I0127 13:59:27.742411  592993 pod_ready.go:82] duration metric: took 3.532297ms for pod "kube-controller-manager-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:27.742423  592993 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-prcb4" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:27.746536  592993 pod_ready.go:93] pod "kube-proxy-prcb4" in "kube-system" namespace has status "Ready":"True"
	I0127 13:59:27.746558  592993 pod_ready.go:82] duration metric: took 4.126767ms for pod "kube-proxy-prcb4" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:27.746571  592993 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:27.750564  592993 pod_ready.go:93] pod "kube-scheduler-test-preload-585145" in "kube-system" namespace has status "Ready":"True"
	I0127 13:59:27.750583  592993 pod_ready.go:82] duration metric: took 4.003068ms for pod "kube-scheduler-test-preload-585145" in "kube-system" namespace to be "Ready" ...
	I0127 13:59:27.750594  592993 pod_ready.go:39] duration metric: took 5.530064732s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:59:27.750611  592993 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:59:27.750670  592993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:59:27.765212  592993 api_server.go:72] duration metric: took 13.224179058s to wait for apiserver process to appear ...
	I0127 13:59:27.765228  592993 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:59:27.765258  592993 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I0127 13:59:27.769796  592993 api_server.go:279] https://192.168.39.201:8443/healthz returned 200:
	ok
	I0127 13:59:27.770474  592993 api_server.go:141] control plane version: v1.24.4
	I0127 13:59:27.770493  592993 api_server.go:131] duration metric: took 5.257522ms to wait for apiserver health ...
	I0127 13:59:27.770502  592993 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:59:27.777872  592993 system_pods.go:59] 7 kube-system pods found
	I0127 13:59:27.777896  592993 system_pods.go:61] "coredns-6d4b75cb6d-g886z" [85c7b552-3cd2-4c11-ad8e-899054f17522] Running
	I0127 13:59:27.777902  592993 system_pods.go:61] "etcd-test-preload-585145" [e8e2a198-9ba7-4957-8997-1897bd9b7518] Running
	I0127 13:59:27.777907  592993 system_pods.go:61] "kube-apiserver-test-preload-585145" [2b61e68d-7bc2-4c87-b524-741254ded875] Running
	I0127 13:59:27.777913  592993 system_pods.go:61] "kube-controller-manager-test-preload-585145" [2e18b28f-4565-4156-bb2b-1a784f95112d] Running
	I0127 13:59:27.777917  592993 system_pods.go:61] "kube-proxy-prcb4" [99dacbcb-c4cf-4986-aa67-8881895df306] Running
	I0127 13:59:27.777922  592993 system_pods.go:61] "kube-scheduler-test-preload-585145" [105eb0a5-4ae2-4690-9c8a-604c9b3b877c] Running
	I0127 13:59:27.777927  592993 system_pods.go:61] "storage-provisioner" [47f79f66-a24a-47d3-8f2b-2790f1d92ab4] Running
	I0127 13:59:27.777935  592993 system_pods.go:74] duration metric: took 7.425718ms to wait for pod list to return data ...
	I0127 13:59:27.777946  592993 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:59:27.819413  592993 default_sa.go:45] found service account: "default"
	I0127 13:59:27.819435  592993 default_sa.go:55] duration metric: took 41.481941ms for default service account to be created ...
	I0127 13:59:27.819445  592993 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:59:28.021673  592993 system_pods.go:87] 7 kube-system pods found
	I0127 13:59:28.220649  592993 system_pods.go:105] "coredns-6d4b75cb6d-g886z" [85c7b552-3cd2-4c11-ad8e-899054f17522] Running
	I0127 13:59:28.220675  592993 system_pods.go:105] "etcd-test-preload-585145" [e8e2a198-9ba7-4957-8997-1897bd9b7518] Running
	I0127 13:59:28.220683  592993 system_pods.go:105] "kube-apiserver-test-preload-585145" [2b61e68d-7bc2-4c87-b524-741254ded875] Running
	I0127 13:59:28.220690  592993 system_pods.go:105] "kube-controller-manager-test-preload-585145" [2e18b28f-4565-4156-bb2b-1a784f95112d] Running
	I0127 13:59:28.220698  592993 system_pods.go:105] "kube-proxy-prcb4" [99dacbcb-c4cf-4986-aa67-8881895df306] Running
	I0127 13:59:28.220706  592993 system_pods.go:105] "kube-scheduler-test-preload-585145" [105eb0a5-4ae2-4690-9c8a-604c9b3b877c] Running
	I0127 13:59:28.220715  592993 system_pods.go:105] "storage-provisioner" [47f79f66-a24a-47d3-8f2b-2790f1d92ab4] Running
	I0127 13:59:28.220736  592993 system_pods.go:147] duration metric: took 401.27455ms to wait for k8s-apps to be running ...
	I0127 13:59:28.220752  592993 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 13:59:28.220818  592993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:59:28.235467  592993 system_svc.go:56] duration metric: took 14.708752ms WaitForService to wait for kubelet
	I0127 13:59:28.235495  592993 kubeadm.go:582] duration metric: took 13.694462654s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:59:28.235521  592993 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:59:28.419631  592993 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:59:28.419653  592993 node_conditions.go:123] node cpu capacity is 2
	I0127 13:59:28.419664  592993 node_conditions.go:105] duration metric: took 184.134557ms to run NodePressure ...
	I0127 13:59:28.419676  592993 start.go:241] waiting for startup goroutines ...
	I0127 13:59:28.419683  592993 start.go:246] waiting for cluster config update ...
	I0127 13:59:28.419696  592993 start.go:255] writing updated cluster config ...
	I0127 13:59:28.419920  592993 ssh_runner.go:195] Run: rm -f paused
	I0127 13:59:28.467304  592993 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0127 13:59:28.468968  592993 out.go:201] 
	W0127 13:59:28.470103  592993 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0127 13:59:28.471154  592993 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0127 13:59:28.472215  592993 out.go:177] * Done! kubectl is now configured to use "test-preload-585145" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.374400813Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=adddf783-fe8d-4047-8948-b3613624a58b name=/runtime.v1.RuntimeService/Version
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.375312476Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a3440f9-934c-4eac-b1e5-b2711cc7f35c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.375706695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986369375690892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a3440f9-934c-4eac-b1e5-b2711cc7f35c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.376230335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0dd5b5c7-8b86-41ce-af9a-93f487dcb0be name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.376289902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0dd5b5c7-8b86-41ce-af9a-93f487dcb0be name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.376444028Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3546e8b838d488714a448e476bf7a5df3703d629b40944aaae80dc693b107f,PodSandboxId:b3512e6d6b499d4efa20cda3efcc6d56093675d09289d98fa714e4866a4f9e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737986360328270215,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-g886z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c7b552-3cd2-4c11-ad8e-899054f17522,},Annotations:map[string]string{io.kubernetes.container.hash: 74979410,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122728b8b5c6d2e68341247569807a775bdff70779238e3a7ac2e915dbfcf22d,PodSandboxId:7daa1bd6e5ca291b8f0404bcbfeb4bcb9391eee240d9ccef26aa45b4ecb8bdb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737986353011953234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-prcb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 99dacbcb-c4cf-4986-aa67-8881895df306,},Annotations:map[string]string{io.kubernetes.container.hash: 1580e78d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f250fd53a9cc4e68ae4f587fb64c8b71e8eceec8122276c5f693faa7af91011,PodSandboxId:94e8dcf752fa26e37bbfec4dbab5b1b8c10a09c60922b173c9af27ca3c893b3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986352956913293,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
f79f66-a24a-47d3-8f2b-2790f1d92ab4,},Annotations:map[string]string{io.kubernetes.container.hash: e647b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668537458da3cb4554c2ed0af0a9ac8f167bd336729f9fb0d8fd499d6ca73934,PodSandboxId:0379628b45ffc8433b6a3788c2d55ee31a4624c4efd8fb21e20089403e1b7148,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737986347979185987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f73782b
bee5ed031d2d4b433885125,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea55d24cd038e82bed67fc06147dfd2848a5b5d8938ed6c4c89e7243bd98fab,PodSandboxId:ff8cfc711ef13566f4df5416517abf7dd2ca275f70b689df63c5c0fae50e57ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737986347970091977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1143e6acb1fd7f795a08
099418e2ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: d0277943,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1df96c6f2f3318fa66ea801213abe4713a19f0ef4346a4a4f88f33a3a1b9744,PodSandboxId:871ba3c5dca37d4683a2cbd20c298725e008eab8822e871308db86bf7acd3ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737986347921996569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2a
ead0d0ff95f3141200355ee992fc,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:190e559fe440c1efc99e5ccd506548f1a971b547f2919fc5a7a1bf3df6793691,PodSandboxId:65f41ee97bec3202c5d5212c9ca76a2081822c34e62b06d38f76e26f12937cbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737986347843013288,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1d4ba71c80aa541adcab4317885abe2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 885545d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0dd5b5c7-8b86-41ce-af9a-93f487dcb0be name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.384704289Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f52e72a5-39a8-4cd2-9cd7-3f682f6c3dc5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.384861765Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b3512e6d6b499d4efa20cda3efcc6d56093675d09289d98fa714e4866a4f9e52,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-g886z,Uid:85c7b552-3cd2-4c11-ad8e-899054f17522,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737986360097051307,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-g886z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c7b552-3cd2-4c11-ad8e-899054f17522,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T13:59:12.182684144Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7daa1bd6e5ca291b8f0404bcbfeb4bcb9391eee240d9ccef26aa45b4ecb8bdb8,Metadata:&PodSandboxMetadata{Name:kube-proxy-prcb4,Uid:99dacbcb-c4cf-4986-aa67-8881895df306,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1737986352800825708,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-prcb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99dacbcb-c4cf-4986-aa67-8881895df306,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T13:59:12.182703586Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94e8dcf752fa26e37bbfec4dbab5b1b8c10a09c60922b173c9af27ca3c893b3c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:47f79f66-a24a-47d3-8f2b-2790f1d92ab4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737986352799365968,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47f79f66-a24a-47d3-8f2b-2790
f1d92ab4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-27T13:59:12.182705676Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0379628b45ffc8433b6a3788c2d55ee31a4624c4efd8fb21e20089403e1b7148,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-585145,Uid:23f7378
2bbee5ed031d2d4b433885125,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737986347741467883,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f73782bbee5ed031d2d4b433885125,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 23f73782bbee5ed031d2d4b433885125,kubernetes.io/config.seen: 2025-01-27T13:59:07.179833116Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:871ba3c5dca37d4683a2cbd20c298725e008eab8822e871308db86bf7acd3ea6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-585145,Uid:7e2aead0d0ff95f3141200355ee992fc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737986347724082992,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-585145,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2aead0d0ff95f3141200355ee992fc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7e2aead0d0ff95f3141200355ee992fc,kubernetes.io/config.seen: 2025-01-27T13:59:07.179773092Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ff8cfc711ef13566f4df5416517abf7dd2ca275f70b689df63c5c0fae50e57ad,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-585145,Uid:1143e6acb1fd7f795a08099418e2ebd2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737986347719570382,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1143e6acb1fd7f795a08099418e2ebd2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.201:8443,kubernetes.io/config.hash: 1143e6acb1fd7f795a08099418e2ebd2,kub
ernetes.io/config.seen: 2025-01-27T13:59:07.179751296Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:65f41ee97bec3202c5d5212c9ca76a2081822c34e62b06d38f76e26f12937cbb,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-585145,Uid:a1d4ba71c80aa541adcab4317885abe2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737986347708875324,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1d4ba71c80aa541adcab4317885abe2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.201:2379,kubernetes.io/config.hash: a1d4ba71c80aa541adcab4317885abe2,kubernetes.io/config.seen: 2025-01-27T13:59:07.244732138Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f52e72a5-39a8-4cd2-9cd7-3f682f6c3dc5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.385434709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4f749f1-a389-4c9c-83bf-69934bbd09cc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.385513263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4f749f1-a389-4c9c-83bf-69934bbd09cc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.385645165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3546e8b838d488714a448e476bf7a5df3703d629b40944aaae80dc693b107f,PodSandboxId:b3512e6d6b499d4efa20cda3efcc6d56093675d09289d98fa714e4866a4f9e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737986360328270215,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-g886z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c7b552-3cd2-4c11-ad8e-899054f17522,},Annotations:map[string]string{io.kubernetes.container.hash: 74979410,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122728b8b5c6d2e68341247569807a775bdff70779238e3a7ac2e915dbfcf22d,PodSandboxId:7daa1bd6e5ca291b8f0404bcbfeb4bcb9391eee240d9ccef26aa45b4ecb8bdb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737986353011953234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-prcb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 99dacbcb-c4cf-4986-aa67-8881895df306,},Annotations:map[string]string{io.kubernetes.container.hash: 1580e78d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f250fd53a9cc4e68ae4f587fb64c8b71e8eceec8122276c5f693faa7af91011,PodSandboxId:94e8dcf752fa26e37bbfec4dbab5b1b8c10a09c60922b173c9af27ca3c893b3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986352956913293,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
f79f66-a24a-47d3-8f2b-2790f1d92ab4,},Annotations:map[string]string{io.kubernetes.container.hash: e647b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668537458da3cb4554c2ed0af0a9ac8f167bd336729f9fb0d8fd499d6ca73934,PodSandboxId:0379628b45ffc8433b6a3788c2d55ee31a4624c4efd8fb21e20089403e1b7148,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737986347979185987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f73782b
bee5ed031d2d4b433885125,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea55d24cd038e82bed67fc06147dfd2848a5b5d8938ed6c4c89e7243bd98fab,PodSandboxId:ff8cfc711ef13566f4df5416517abf7dd2ca275f70b689df63c5c0fae50e57ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737986347970091977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1143e6acb1fd7f795a08
099418e2ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: d0277943,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1df96c6f2f3318fa66ea801213abe4713a19f0ef4346a4a4f88f33a3a1b9744,PodSandboxId:871ba3c5dca37d4683a2cbd20c298725e008eab8822e871308db86bf7acd3ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737986347921996569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2a
ead0d0ff95f3141200355ee992fc,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:190e559fe440c1efc99e5ccd506548f1a971b547f2919fc5a7a1bf3df6793691,PodSandboxId:65f41ee97bec3202c5d5212c9ca76a2081822c34e62b06d38f76e26f12937cbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737986347843013288,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1d4ba71c80aa541adcab4317885abe2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 885545d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4f749f1-a389-4c9c-83bf-69934bbd09cc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.410333912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=794bb8f7-d0ed-456a-be55-9b579a14ca8c name=/runtime.v1.RuntimeService/Version
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.410388339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=794bb8f7-d0ed-456a-be55-9b579a14ca8c name=/runtime.v1.RuntimeService/Version
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.411233959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0604e9f-cf2a-4538-a6f2-68d243d8b39f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.411620066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986369411604618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0604e9f-cf2a-4538-a6f2-68d243d8b39f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.412045494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b089be6-1d13-47b0-90c6-6414b7dd5e00 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.412085531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b089be6-1d13-47b0-90c6-6414b7dd5e00 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.412439079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3546e8b838d488714a448e476bf7a5df3703d629b40944aaae80dc693b107f,PodSandboxId:b3512e6d6b499d4efa20cda3efcc6d56093675d09289d98fa714e4866a4f9e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737986360328270215,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-g886z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c7b552-3cd2-4c11-ad8e-899054f17522,},Annotations:map[string]string{io.kubernetes.container.hash: 74979410,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122728b8b5c6d2e68341247569807a775bdff70779238e3a7ac2e915dbfcf22d,PodSandboxId:7daa1bd6e5ca291b8f0404bcbfeb4bcb9391eee240d9ccef26aa45b4ecb8bdb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737986353011953234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-prcb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 99dacbcb-c4cf-4986-aa67-8881895df306,},Annotations:map[string]string{io.kubernetes.container.hash: 1580e78d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f250fd53a9cc4e68ae4f587fb64c8b71e8eceec8122276c5f693faa7af91011,PodSandboxId:94e8dcf752fa26e37bbfec4dbab5b1b8c10a09c60922b173c9af27ca3c893b3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986352956913293,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
f79f66-a24a-47d3-8f2b-2790f1d92ab4,},Annotations:map[string]string{io.kubernetes.container.hash: e647b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668537458da3cb4554c2ed0af0a9ac8f167bd336729f9fb0d8fd499d6ca73934,PodSandboxId:0379628b45ffc8433b6a3788c2d55ee31a4624c4efd8fb21e20089403e1b7148,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737986347979185987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f73782b
bee5ed031d2d4b433885125,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea55d24cd038e82bed67fc06147dfd2848a5b5d8938ed6c4c89e7243bd98fab,PodSandboxId:ff8cfc711ef13566f4df5416517abf7dd2ca275f70b689df63c5c0fae50e57ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737986347970091977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1143e6acb1fd7f795a08
099418e2ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: d0277943,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1df96c6f2f3318fa66ea801213abe4713a19f0ef4346a4a4f88f33a3a1b9744,PodSandboxId:871ba3c5dca37d4683a2cbd20c298725e008eab8822e871308db86bf7acd3ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737986347921996569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2a
ead0d0ff95f3141200355ee992fc,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:190e559fe440c1efc99e5ccd506548f1a971b547f2919fc5a7a1bf3df6793691,PodSandboxId:65f41ee97bec3202c5d5212c9ca76a2081822c34e62b06d38f76e26f12937cbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737986347843013288,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1d4ba71c80aa541adcab4317885abe2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 885545d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b089be6-1d13-47b0-90c6-6414b7dd5e00 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.441372525Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2923379b-63be-47a4-ad28-5faad3f6885e name=/runtime.v1.RuntimeService/Version
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.441441355Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2923379b-63be-47a4-ad28-5faad3f6885e name=/runtime.v1.RuntimeService/Version
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.442099425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1efb1ff-fc9d-4bba-b603-13fddd94df0b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.442502877Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986369442487584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1efb1ff-fc9d-4bba-b603-13fddd94df0b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.442925279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d54ec8b-2471-4da3-afff-32da0b74670f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.442964386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d54ec8b-2471-4da3-afff-32da0b74670f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:59:29 test-preload-585145 crio[667]: time="2025-01-27 13:59:29.443094337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e3546e8b838d488714a448e476bf7a5df3703d629b40944aaae80dc693b107f,PodSandboxId:b3512e6d6b499d4efa20cda3efcc6d56093675d09289d98fa714e4866a4f9e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737986360328270215,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-g886z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c7b552-3cd2-4c11-ad8e-899054f17522,},Annotations:map[string]string{io.kubernetes.container.hash: 74979410,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122728b8b5c6d2e68341247569807a775bdff70779238e3a7ac2e915dbfcf22d,PodSandboxId:7daa1bd6e5ca291b8f0404bcbfeb4bcb9391eee240d9ccef26aa45b4ecb8bdb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737986353011953234,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-prcb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 99dacbcb-c4cf-4986-aa67-8881895df306,},Annotations:map[string]string{io.kubernetes.container.hash: 1580e78d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f250fd53a9cc4e68ae4f587fb64c8b71e8eceec8122276c5f693faa7af91011,PodSandboxId:94e8dcf752fa26e37bbfec4dbab5b1b8c10a09c60922b173c9af27ca3c893b3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986352956913293,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47
f79f66-a24a-47d3-8f2b-2790f1d92ab4,},Annotations:map[string]string{io.kubernetes.container.hash: e647b1e9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:668537458da3cb4554c2ed0af0a9ac8f167bd336729f9fb0d8fd499d6ca73934,PodSandboxId:0379628b45ffc8433b6a3788c2d55ee31a4624c4efd8fb21e20089403e1b7148,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737986347979185987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f73782b
bee5ed031d2d4b433885125,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea55d24cd038e82bed67fc06147dfd2848a5b5d8938ed6c4c89e7243bd98fab,PodSandboxId:ff8cfc711ef13566f4df5416517abf7dd2ca275f70b689df63c5c0fae50e57ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737986347970091977,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1143e6acb1fd7f795a08
099418e2ebd2,},Annotations:map[string]string{io.kubernetes.container.hash: d0277943,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1df96c6f2f3318fa66ea801213abe4713a19f0ef4346a4a4f88f33a3a1b9744,PodSandboxId:871ba3c5dca37d4683a2cbd20c298725e008eab8822e871308db86bf7acd3ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737986347921996569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e2a
ead0d0ff95f3141200355ee992fc,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:190e559fe440c1efc99e5ccd506548f1a971b547f2919fc5a7a1bf3df6793691,PodSandboxId:65f41ee97bec3202c5d5212c9ca76a2081822c34e62b06d38f76e26f12937cbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737986347843013288,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-585145,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1d4ba71c80aa541adcab4317885abe2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 885545d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d54ec8b-2471-4da3-afff-32da0b74670f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0e3546e8b838d       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   b3512e6d6b499       coredns-6d4b75cb6d-g886z
	122728b8b5c6d       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   7daa1bd6e5ca2       kube-proxy-prcb4
	1f250fd53a9cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   94e8dcf752fa2       storage-provisioner
	668537458da3c       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   0379628b45ffc       kube-scheduler-test-preload-585145
	eea55d24cd038       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   ff8cfc711ef13       kube-apiserver-test-preload-585145
	e1df96c6f2f33       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   871ba3c5dca37       kube-controller-manager-test-preload-585145
	190e559fe440c       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   65f41ee97bec3       etcd-test-preload-585145
	
	
	==> coredns [0e3546e8b838d488714a448e476bf7a5df3703d629b40944aaae80dc693b107f] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:37139 - 30595 "HINFO IN 3668346617141746904.8212625854050556241. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016881846s
	
	
	==> describe nodes <==
	Name:               test-preload-585145
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-585145
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d
	                    minikube.k8s.io/name=test-preload-585145
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_57_58_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:57:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-585145
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:59:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 13:59:21 +0000   Mon, 27 Jan 2025 13:57:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 13:59:21 +0000   Mon, 27 Jan 2025 13:57:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 13:59:21 +0000   Mon, 27 Jan 2025 13:57:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 13:59:21 +0000   Mon, 27 Jan 2025 13:59:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    test-preload-585145
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4327e4b5669a49a6b5d518ed6fd0dba3
	  System UUID:                4327e4b5-669a-49a6-b5d5-18ed6fd0dba3
	  Boot ID:                    1db78056-31a5-4b23-896e-e281ac2ac265
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-g886z                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 etcd-test-preload-585145                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         91s
	  kube-system                 kube-apiserver-test-preload-585145             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-test-preload-585145    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-prcb4                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-test-preload-585145             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 77s                kube-proxy       
	  Normal  NodeHasSufficientMemory  99s (x4 over 99s)  kubelet          Node test-preload-585145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x4 over 99s)  kubelet          Node test-preload-585145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x4 over 99s)  kubelet          Node test-preload-585145 status is now: NodeHasSufficientPID
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node test-preload-585145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node test-preload-585145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s                kubelet          Node test-preload-585145 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                81s                kubelet          Node test-preload-585145 status is now: NodeReady
	  Normal  RegisteredNode           79s                node-controller  Node test-preload-585145 event: Registered Node test-preload-585145 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-585145 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-585145 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-585145 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-585145 event: Registered Node test-preload-585145 in Controller
	
	
	==> dmesg <==
	[Jan27 13:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051364] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039924] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.884745] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.696603] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.622771] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.889009] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.064693] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055300] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.155792] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.132718] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.290797] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[Jan27 13:59] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.056000] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.708100] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +5.839620] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.778198] systemd-fstab-generator[1772]: Ignoring "noauto" option for root device
	[  +5.513250] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [190e559fe440c1efc99e5ccd506548f1a971b547f2919fc5a7a1bf3df6793691] <==
	{"level":"info","ts":"2025-01-27T13:59:08.190Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"7315e47f21b89457","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T13:59:08.191Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T13:59:08.201Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T13:59:08.201Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7315e47f21b89457","initial-advertise-peer-urls":["https://192.168.39.201:2380"],"listen-peer-urls":["https://192.168.39.201:2380"],"advertise-client-urls":["https://192.168.39.201:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.201:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T13:59:08.201Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T13:59:08.202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 switched to configuration voters=(8292785523550360663)"}
	{"level":"info","ts":"2025-01-27T13:59:08.202Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1777413e1d1fef45","local-member-id":"7315e47f21b89457","added-peer-id":"7315e47f21b89457","added-peer-peer-urls":["https://192.168.39.201:2380"]}
	{"level":"info","ts":"2025-01-27T13:59:08.202Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2025-01-27T13:59:08.202Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2025-01-27T13:59:08.202Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1777413e1d1fef45","local-member-id":"7315e47f21b89457","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:59:08.202Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:59:09.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T13:59:09.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T13:59:09.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 received MsgPreVoteResp from 7315e47f21b89457 at term 2"}
	{"level":"info","ts":"2025-01-27T13:59:09.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T13:59:09.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 received MsgVoteResp from 7315e47f21b89457 at term 3"}
	{"level":"info","ts":"2025-01-27T13:59:09.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became leader at term 3"}
	{"level":"info","ts":"2025-01-27T13:59:09.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7315e47f21b89457 elected leader 7315e47f21b89457 at term 3"}
	{"level":"info","ts":"2025-01-27T13:59:09.254Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"7315e47f21b89457","local-member-attributes":"{Name:test-preload-585145 ClientURLs:[https://192.168.39.201:2379]}","request-path":"/0/members/7315e47f21b89457/attributes","cluster-id":"1777413e1d1fef45","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T13:59:09.255Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T13:59:09.257Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.201:2379"}
	{"level":"info","ts":"2025-01-27T13:59:09.257Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T13:59:09.258Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T13:59:09.258Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T13:59:09.262Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:59:29 up 0 min,  0 users,  load average: 0.73, 0.22, 0.08
	Linux test-preload-585145 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [eea55d24cd038e82bed67fc06147dfd2848a5b5d8938ed6c4c89e7243bd98fab] <==
	I0127 13:59:11.709407       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0127 13:59:11.709428       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0127 13:59:11.709479       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 13:59:11.716673       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0127 13:59:11.716718       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0127 13:59:11.718314       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0127 13:59:11.801487       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0127 13:59:11.809876       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0127 13:59:11.817232       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0127 13:59:11.877554       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0127 13:59:11.877941       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0127 13:59:11.877955       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0127 13:59:11.881461       1 cache.go:39] Caches are synced for autoregister controller
	I0127 13:59:11.881733       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 13:59:11.887676       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 13:59:12.346638       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0127 13:59:12.680496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 13:59:13.253009       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0127 13:59:13.461202       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0127 13:59:13.478925       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0127 13:59:13.520584       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0127 13:59:13.533553       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 13:59:13.538637       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 13:59:24.353093       1 controller.go:611] quota admission added evaluator for: endpoints
	I0127 13:59:24.404905       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e1df96c6f2f3318fa66ea801213abe4713a19f0ef4346a4a4f88f33a3a1b9744] <==
	I0127 13:59:24.205964       1 shared_informer.go:262] Caches are synced for HPA
	I0127 13:59:24.207506       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0127 13:59:24.209319       1 shared_informer.go:262] Caches are synced for GC
	I0127 13:59:24.211546       1 shared_informer.go:262] Caches are synced for node
	I0127 13:59:24.211580       1 range_allocator.go:173] Starting range CIDR allocator
	I0127 13:59:24.211584       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0127 13:59:24.211590       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0127 13:59:24.213783       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0127 13:59:24.217637       1 shared_informer.go:262] Caches are synced for attach detach
	I0127 13:59:24.220208       1 shared_informer.go:262] Caches are synced for persistent volume
	I0127 13:59:24.225756       1 shared_informer.go:262] Caches are synced for ephemeral
	I0127 13:59:24.285113       1 shared_informer.go:262] Caches are synced for taint
	I0127 13:59:24.285291       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0127 13:59:24.285389       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0127 13:59:24.285401       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-585145. Assuming now as a timestamp.
	I0127 13:59:24.285466       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0127 13:59:24.285914       1 event.go:294] "Event occurred" object="test-preload-585145" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-585145 event: Registered Node test-preload-585145 in Controller"
	I0127 13:59:24.382206       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0127 13:59:24.386299       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0127 13:59:24.407025       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 13:59:24.423934       1 shared_informer.go:262] Caches are synced for crt configmap
	I0127 13:59:24.447174       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 13:59:24.834833       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 13:59:24.853115       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 13:59:24.853126       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [122728b8b5c6d2e68341247569807a775bdff70779238e3a7ac2e915dbfcf22d] <==
	I0127 13:59:13.217630       1 node.go:163] Successfully retrieved node IP: 192.168.39.201
	I0127 13:59:13.217696       1 server_others.go:138] "Detected node IP" address="192.168.39.201"
	I0127 13:59:13.217785       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0127 13:59:13.246406       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0127 13:59:13.246437       1 server_others.go:206] "Using iptables Proxier"
	I0127 13:59:13.246642       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0127 13:59:13.247045       1 server.go:661] "Version info" version="v1.24.4"
	I0127 13:59:13.247073       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:59:13.248468       1 config.go:317] "Starting service config controller"
	I0127 13:59:13.248651       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0127 13:59:13.248693       1 config.go:226] "Starting endpoint slice config controller"
	I0127 13:59:13.248698       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0127 13:59:13.250475       1 config.go:444] "Starting node config controller"
	I0127 13:59:13.250518       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0127 13:59:13.349936       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0127 13:59:13.349975       1 shared_informer.go:262] Caches are synced for service config
	I0127 13:59:13.351362       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [668537458da3cb4554c2ed0af0a9ac8f167bd336729f9fb0d8fd499d6ca73934] <==
	I0127 13:59:09.098458       1 serving.go:348] Generated self-signed cert in-memory
	W0127 13:59:11.763282       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 13:59:11.763406       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 13:59:11.763507       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 13:59:11.763538       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 13:59:11.810378       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0127 13:59:11.810436       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:59:11.816469       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0127 13:59:11.816524       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0127 13:59:11.816492       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 13:59:11.816868       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 13:59:11.917417       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.179454    1124 apiserver.go:52] "Watching apiserver"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.182874    1124 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.183058    1124 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.183198    1124 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: E0127 13:59:12.184349    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-g886z" podUID=85c7b552-3cd2-4c11-ad8e-899054f17522
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.233014    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjwd\" (UniqueName: \"kubernetes.io/projected/85c7b552-3cd2-4c11-ad8e-899054f17522-kube-api-access-hjjwd\") pod \"coredns-6d4b75cb6d-g886z\" (UID: \"85c7b552-3cd2-4c11-ad8e-899054f17522\") " pod="kube-system/coredns-6d4b75cb6d-g886z"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.233169    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t82z\" (UniqueName: \"kubernetes.io/projected/47f79f66-a24a-47d3-8f2b-2790f1d92ab4-kube-api-access-7t82z\") pod \"storage-provisioner\" (UID: \"47f79f66-a24a-47d3-8f2b-2790f1d92ab4\") " pod="kube-system/storage-provisioner"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.233201    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99dacbcb-c4cf-4986-aa67-8881895df306-lib-modules\") pod \"kube-proxy-prcb4\" (UID: \"99dacbcb-c4cf-4986-aa67-8881895df306\") " pod="kube-system/kube-proxy-prcb4"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.233221    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/99dacbcb-c4cf-4986-aa67-8881895df306-kube-proxy\") pod \"kube-proxy-prcb4\" (UID: \"99dacbcb-c4cf-4986-aa67-8881895df306\") " pod="kube-system/kube-proxy-prcb4"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.233237    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99dacbcb-c4cf-4986-aa67-8881895df306-xtables-lock\") pod \"kube-proxy-prcb4\" (UID: \"99dacbcb-c4cf-4986-aa67-8881895df306\") " pod="kube-system/kube-proxy-prcb4"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.233254    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/47f79f66-a24a-47d3-8f2b-2790f1d92ab4-tmp\") pod \"storage-provisioner\" (UID: \"47f79f66-a24a-47d3-8f2b-2790f1d92ab4\") " pod="kube-system/storage-provisioner"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.233272    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zcw6\" (UniqueName: \"kubernetes.io/projected/99dacbcb-c4cf-4986-aa67-8881895df306-kube-api-access-8zcw6\") pod \"kube-proxy-prcb4\" (UID: \"99dacbcb-c4cf-4986-aa67-8881895df306\") " pod="kube-system/kube-proxy-prcb4"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.233294    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85c7b552-3cd2-4c11-ad8e-899054f17522-config-volume\") pod \"coredns-6d4b75cb6d-g886z\" (UID: \"85c7b552-3cd2-4c11-ad8e-899054f17522\") " pod="kube-system/coredns-6d4b75cb6d-g886z"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: I0127 13:59:12.233303    1124 reconciler.go:159] "Reconciler: start to sync state"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: E0127 13:59:12.233643    1124 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: E0127 13:59:12.337269    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: E0127 13:59:12.337387    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/85c7b552-3cd2-4c11-ad8e-899054f17522-config-volume podName:85c7b552-3cd2-4c11-ad8e-899054f17522 nodeName:}" failed. No retries permitted until 2025-01-27 13:59:12.837361149 +0000 UTC m=+5.777255387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/85c7b552-3cd2-4c11-ad8e-899054f17522-config-volume") pod "coredns-6d4b75cb6d-g886z" (UID: "85c7b552-3cd2-4c11-ad8e-899054f17522") : object "kube-system"/"coredns" not registered
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: E0127 13:59:12.840047    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 13:59:12 test-preload-585145 kubelet[1124]: E0127 13:59:12.840122    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/85c7b552-3cd2-4c11-ad8e-899054f17522-config-volume podName:85c7b552-3cd2-4c11-ad8e-899054f17522 nodeName:}" failed. No retries permitted until 2025-01-27 13:59:13.840099557 +0000 UTC m=+6.779993798 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/85c7b552-3cd2-4c11-ad8e-899054f17522-config-volume") pod "coredns-6d4b75cb6d-g886z" (UID: "85c7b552-3cd2-4c11-ad8e-899054f17522") : object "kube-system"/"coredns" not registered
	Jan 27 13:59:13 test-preload-585145 kubelet[1124]: E0127 13:59:13.848972    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 13:59:13 test-preload-585145 kubelet[1124]: E0127 13:59:13.849027    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/85c7b552-3cd2-4c11-ad8e-899054f17522-config-volume podName:85c7b552-3cd2-4c11-ad8e-899054f17522 nodeName:}" failed. No retries permitted until 2025-01-27 13:59:15.849014426 +0000 UTC m=+8.788908676 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/85c7b552-3cd2-4c11-ad8e-899054f17522-config-volume") pod "coredns-6d4b75cb6d-g886z" (UID: "85c7b552-3cd2-4c11-ad8e-899054f17522") : object "kube-system"/"coredns" not registered
	Jan 27 13:59:14 test-preload-585145 kubelet[1124]: E0127 13:59:14.285387    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-g886z" podUID=85c7b552-3cd2-4c11-ad8e-899054f17522
	Jan 27 13:59:15 test-preload-585145 kubelet[1124]: E0127 13:59:15.865505    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 13:59:15 test-preload-585145 kubelet[1124]: E0127 13:59:15.865632    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/85c7b552-3cd2-4c11-ad8e-899054f17522-config-volume podName:85c7b552-3cd2-4c11-ad8e-899054f17522 nodeName:}" failed. No retries permitted until 2025-01-27 13:59:19.865614977 +0000 UTC m=+12.805509214 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/85c7b552-3cd2-4c11-ad8e-899054f17522-config-volume") pod "coredns-6d4b75cb6d-g886z" (UID: "85c7b552-3cd2-4c11-ad8e-899054f17522") : object "kube-system"/"coredns" not registered
	Jan 27 13:59:16 test-preload-585145 kubelet[1124]: E0127 13:59:16.286417    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-g886z" podUID=85c7b552-3cd2-4c11-ad8e-899054f17522
	
	
	==> storage-provisioner [1f250fd53a9cc4e68ae4f587fb64c8b71e8eceec8122276c5f693faa7af91011] <==
	I0127 13:59:13.050384       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-585145 -n test-preload-585145
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-585145 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-585145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-585145
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-585145: (1.050059062s)
--- FAIL: TestPreload (163.77s)

                                                
                                    
x
+
TestKubernetesUpgrade (1145.86s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225004 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-225004 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.159520779s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-225004] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-225004" primary control-plane node in "kubernetes-upgrade-225004" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:04:44.123093  597402 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:04:44.123351  597402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:04:44.123366  597402 out.go:358] Setting ErrFile to fd 2...
	I0127 14:04:44.123370  597402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:04:44.123566  597402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:04:44.124231  597402 out.go:352] Setting JSON to false
	I0127 14:04:44.125226  597402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17229,"bootTime":1737969455,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:04:44.125342  597402 start.go:139] virtualization: kvm guest
	I0127 14:04:44.127260  597402 out.go:177] * [kubernetes-upgrade-225004] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:04:44.128505  597402 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:04:44.128507  597402 notify.go:220] Checking for updates...
	I0127 14:04:44.130931  597402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:04:44.132163  597402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:04:44.133415  597402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:04:44.134541  597402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:04:44.135674  597402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:04:44.137389  597402 config.go:182] Loaded profile config "NoKubernetes-412983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0127 14:04:44.137526  597402 config.go:182] Loaded profile config "cert-expiration-335486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:04:44.137680  597402 config.go:182] Loaded profile config "running-upgrade-435002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 14:04:44.137813  597402 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:04:44.174456  597402 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:04:44.175663  597402 start.go:297] selected driver: kvm2
	I0127 14:04:44.175678  597402 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:04:44.175688  597402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:04:44.176409  597402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:04:44.176478  597402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:04:44.193669  597402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:04:44.193725  597402 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:04:44.194095  597402 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 14:04:44.194152  597402 cni.go:84] Creating CNI manager for ""
	I0127 14:04:44.194223  597402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:04:44.194238  597402 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:04:44.194333  597402 start.go:340] cluster config:
	{Name:kubernetes-upgrade-225004 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-225004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:04:44.194495  597402 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:04:44.196787  597402 out.go:177] * Starting "kubernetes-upgrade-225004" primary control-plane node in "kubernetes-upgrade-225004" cluster
	I0127 14:04:44.197873  597402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 14:04:44.197917  597402 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 14:04:44.197931  597402 cache.go:56] Caching tarball of preloaded images
	I0127 14:04:44.198061  597402 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:04:44.198077  597402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 14:04:44.198198  597402 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/config.json ...
	I0127 14:04:44.198228  597402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/config.json: {Name:mk0fa0e667de263f8abae7ac66e16f5431c7d6fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:04:44.198411  597402 start.go:360] acquireMachinesLock for kubernetes-upgrade-225004: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:04:45.477811  597402 start.go:364] duration metric: took 1.279355409s to acquireMachinesLock for "kubernetes-upgrade-225004"
	I0127 14:04:45.477868  597402 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-225004 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernete
s-upgrade-225004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:04:45.477967  597402 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 14:04:45.479796  597402 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 14:04:45.479982  597402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:04:45.480045  597402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:04:45.497029  597402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42961
	I0127 14:04:45.497420  597402 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:04:45.498116  597402 main.go:141] libmachine: Using API Version  1
	I0127 14:04:45.498143  597402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:04:45.498488  597402 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:04:45.498735  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetMachineName
	I0127 14:04:45.498905  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:04:45.499049  597402 start.go:159] libmachine.API.Create for "kubernetes-upgrade-225004" (driver="kvm2")
	I0127 14:04:45.499080  597402 client.go:168] LocalClient.Create starting
	I0127 14:04:45.499126  597402 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem
	I0127 14:04:45.499163  597402 main.go:141] libmachine: Decoding PEM data...
	I0127 14:04:45.499188  597402 main.go:141] libmachine: Parsing certificate...
	I0127 14:04:45.499259  597402 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem
	I0127 14:04:45.499284  597402 main.go:141] libmachine: Decoding PEM data...
	I0127 14:04:45.499303  597402 main.go:141] libmachine: Parsing certificate...
	I0127 14:04:45.499327  597402 main.go:141] libmachine: Running pre-create checks...
	I0127 14:04:45.499346  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .PreCreateCheck
	I0127 14:04:45.499712  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetConfigRaw
	I0127 14:04:45.500203  597402 main.go:141] libmachine: Creating machine...
	I0127 14:04:45.500225  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .Create
	I0127 14:04:45.500362  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) creating KVM machine...
	I0127 14:04:45.500378  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) creating network...
	I0127 14:04:45.501667  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found existing default KVM network
	I0127 14:04:45.504765  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:45.504570  597426 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0127 14:04:45.505708  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:45.505613  597426 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:38:3f} reservation:<nil>}
	I0127 14:04:45.506699  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:45.506630  597426 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:9d:f6:4c} reservation:<nil>}
	I0127 14:04:45.507509  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:45.507443  597426 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:4a:e8:71} reservation:<nil>}
	I0127 14:04:45.508756  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:45.508694  597426 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00042c610}
	I0127 14:04:45.508823  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | created network xml: 
	I0127 14:04:45.508843  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | <network>
	I0127 14:04:45.508850  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG |   <name>mk-kubernetes-upgrade-225004</name>
	I0127 14:04:45.508867  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG |   <dns enable='no'/>
	I0127 14:04:45.508876  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG |   
	I0127 14:04:45.508891  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0127 14:04:45.508903  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG |     <dhcp>
	I0127 14:04:45.508913  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0127 14:04:45.508926  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG |     </dhcp>
	I0127 14:04:45.508933  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG |   </ip>
	I0127 14:04:45.508938  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG |   
	I0127 14:04:45.508942  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | </network>
	I0127 14:04:45.508949  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | 
	I0127 14:04:45.513613  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | trying to create private KVM network mk-kubernetes-upgrade-225004 192.168.83.0/24...
	I0127 14:04:45.592867  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | private KVM network mk-kubernetes-upgrade-225004 192.168.83.0/24 created
	I0127 14:04:45.592916  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) setting up store path in /home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004 ...
	I0127 14:04:45.592931  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:45.592815  597426 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:04:45.592949  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) building disk image from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:04:45.592987  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Downloading /home/jenkins/minikube-integration/20327-555419/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 14:04:45.871151  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:45.871012  597426 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/id_rsa...
	I0127 14:04:45.910638  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:45.910518  597426 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/kubernetes-upgrade-225004.rawdisk...
	I0127 14:04:45.910666  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | Writing magic tar header
	I0127 14:04:45.910685  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | Writing SSH key tar header
	I0127 14:04:45.910704  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:45.910652  597426 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004 ...
	I0127 14:04:45.910798  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004
	I0127 14:04:45.910824  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004 (perms=drwx------)
	I0127 14:04:45.910840  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines (perms=drwxr-xr-x)
	I0127 14:04:45.910854  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines
	I0127 14:04:45.910869  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube (perms=drwxr-xr-x)
	I0127 14:04:45.910884  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:04:45.910898  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) setting executable bit set on /home/jenkins/minikube-integration/20327-555419 (perms=drwxrwxr-x)
	I0127 14:04:45.910911  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 14:04:45.910924  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 14:04:45.910947  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) creating domain...
	I0127 14:04:45.910981  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419
	I0127 14:04:45.910994  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 14:04:45.910999  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | checking permissions on dir: /home/jenkins
	I0127 14:04:45.911005  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | checking permissions on dir: /home
	I0127 14:04:45.911010  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | skipping /home - not owner
	I0127 14:04:45.912026  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) define libvirt domain using xml: 
	I0127 14:04:45.912043  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) <domain type='kvm'>
	I0127 14:04:45.912049  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   <name>kubernetes-upgrade-225004</name>
	I0127 14:04:45.912063  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   <memory unit='MiB'>2200</memory>
	I0127 14:04:45.912076  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   <vcpu>2</vcpu>
	I0127 14:04:45.912093  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   <features>
	I0127 14:04:45.912102  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <acpi/>
	I0127 14:04:45.912113  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <apic/>
	I0127 14:04:45.912120  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <pae/>
	I0127 14:04:45.912126  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     
	I0127 14:04:45.912232  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   </features>
	I0127 14:04:45.912295  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   <cpu mode='host-passthrough'>
	I0127 14:04:45.912317  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   
	I0127 14:04:45.912343  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   </cpu>
	I0127 14:04:45.912356  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   <os>
	I0127 14:04:45.912366  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <type>hvm</type>
	I0127 14:04:45.912376  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <boot dev='cdrom'/>
	I0127 14:04:45.912386  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <boot dev='hd'/>
	I0127 14:04:45.912395  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <bootmenu enable='no'/>
	I0127 14:04:45.912414  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   </os>
	I0127 14:04:45.912427  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   <devices>
	I0127 14:04:45.912438  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <disk type='file' device='cdrom'>
	I0127 14:04:45.912453  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/boot2docker.iso'/>
	I0127 14:04:45.912464  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <target dev='hdc' bus='scsi'/>
	I0127 14:04:45.912473  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <readonly/>
	I0127 14:04:45.912487  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     </disk>
	I0127 14:04:45.912497  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <disk type='file' device='disk'>
	I0127 14:04:45.912510  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 14:04:45.912527  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/kubernetes-upgrade-225004.rawdisk'/>
	I0127 14:04:45.912538  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <target dev='hda' bus='virtio'/>
	I0127 14:04:45.912550  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     </disk>
	I0127 14:04:45.912562  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <interface type='network'>
	I0127 14:04:45.912573  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <source network='mk-kubernetes-upgrade-225004'/>
	I0127 14:04:45.912585  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <model type='virtio'/>
	I0127 14:04:45.912593  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     </interface>
	I0127 14:04:45.912601  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <interface type='network'>
	I0127 14:04:45.912613  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <source network='default'/>
	I0127 14:04:45.912623  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <model type='virtio'/>
	I0127 14:04:45.912639  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     </interface>
	I0127 14:04:45.912652  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <serial type='pty'>
	I0127 14:04:45.912671  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <target port='0'/>
	I0127 14:04:45.912681  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     </serial>
	I0127 14:04:45.912687  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <console type='pty'>
	I0127 14:04:45.912692  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <target type='serial' port='0'/>
	I0127 14:04:45.912699  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     </console>
	I0127 14:04:45.912704  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     <rng model='virtio'>
	I0127 14:04:45.912710  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)       <backend model='random'>/dev/random</backend>
	I0127 14:04:45.912715  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     </rng>
	I0127 14:04:45.912720  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     
	I0127 14:04:45.912728  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)     
	I0127 14:04:45.912733  597402 main.go:141] libmachine: (kubernetes-upgrade-225004)   </devices>
	I0127 14:04:45.912739  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) </domain>
	I0127 14:04:45.912746  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) 
	I0127 14:04:45.916533  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:2a:6b:2e in network default
	I0127 14:04:45.917133  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) starting domain...
	I0127 14:04:45.917153  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) ensuring networks are active...
	I0127 14:04:45.917170  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:45.917790  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Ensuring network default is active
	I0127 14:04:45.918065  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Ensuring network mk-kubernetes-upgrade-225004 is active
	I0127 14:04:45.918508  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) getting domain XML...
	I0127 14:04:45.919093  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) creating domain...
	I0127 14:04:46.271153  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) waiting for IP...
	I0127 14:04:46.272052  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:46.272647  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:46.272704  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:46.272653  597426 retry.go:31] will retry after 306.656326ms: waiting for domain to come up
	I0127 14:04:46.581334  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:46.581927  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:46.581950  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:46.581899  597426 retry.go:31] will retry after 309.248613ms: waiting for domain to come up
	I0127 14:04:46.892346  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:46.892836  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:46.892866  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:46.892801  597426 retry.go:31] will retry after 431.606916ms: waiting for domain to come up
	I0127 14:04:47.326566  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:47.327003  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:47.327033  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:47.326972  597426 retry.go:31] will retry after 389.173283ms: waiting for domain to come up
	I0127 14:04:47.717561  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:47.718100  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:47.718153  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:47.718046  597426 retry.go:31] will retry after 654.854976ms: waiting for domain to come up
	I0127 14:04:48.374930  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:48.375478  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:48.375508  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:48.375432  597426 retry.go:31] will retry after 950.953824ms: waiting for domain to come up
	I0127 14:04:49.327938  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:49.328384  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:49.328417  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:49.328343  597426 retry.go:31] will retry after 810.908321ms: waiting for domain to come up
	I0127 14:04:50.141403  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:50.141928  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:50.141975  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:50.141904  597426 retry.go:31] will retry after 1.095674873s: waiting for domain to come up
	I0127 14:04:51.239806  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:51.240400  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:51.240434  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:51.240338  597426 retry.go:31] will retry after 1.225885344s: waiting for domain to come up
	I0127 14:04:52.467491  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:52.468038  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:52.468077  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:52.467985  597426 retry.go:31] will retry after 1.650724192s: waiting for domain to come up
	I0127 14:04:54.121051  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:54.121562  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:54.121601  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:54.121546  597426 retry.go:31] will retry after 1.827913403s: waiting for domain to come up
	I0127 14:04:55.950823  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:55.951403  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:55.951466  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:55.951381  597426 retry.go:31] will retry after 2.647019208s: waiting for domain to come up
	I0127 14:04:58.601078  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:04:58.601521  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:04:58.601548  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:04:58.601474  597426 retry.go:31] will retry after 4.300458125s: waiting for domain to come up
	I0127 14:05:02.904634  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:02.905141  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find current IP address of domain kubernetes-upgrade-225004 in network mk-kubernetes-upgrade-225004
	I0127 14:05:02.905172  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | I0127 14:05:02.905107  597426 retry.go:31] will retry after 5.569328175s: waiting for domain to come up
	I0127 14:05:08.475585  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:08.476064  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) found domain IP: 192.168.83.145
	I0127 14:05:08.476100  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) reserving static IP address...
	I0127 14:05:08.476115  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has current primary IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:08.476486  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-225004", mac: "52:54:00:c2:c8:d3", ip: "192.168.83.145"} in network mk-kubernetes-upgrade-225004
	I0127 14:05:08.551285  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | Getting to WaitForSSH function...
	I0127 14:05:08.551311  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) reserved static IP address 192.168.83.145 for domain kubernetes-upgrade-225004
	I0127 14:05:08.551323  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) waiting for SSH...
	I0127 14:05:08.554379  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:08.554915  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:08.554948  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:08.555103  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | Using SSH client type: external
	I0127 14:05:08.555136  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/id_rsa (-rw-------)
	I0127 14:05:08.555171  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:05:08.555196  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | About to run SSH command:
	I0127 14:05:08.555228  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | exit 0
	I0127 14:05:08.691569  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | SSH cmd err, output: <nil>: 
	I0127 14:05:08.691844  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) KVM machine creation complete
	I0127 14:05:08.692233  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetConfigRaw
	I0127 14:05:08.692752  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:05:08.692955  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:05:08.693134  597402 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 14:05:08.693151  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetState
	I0127 14:05:08.694710  597402 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 14:05:08.694727  597402 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 14:05:08.694735  597402 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 14:05:08.694744  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:05:08.697526  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:08.698005  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:08.698040  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:08.698208  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:05:08.698394  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:08.698585  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:08.698716  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:05:08.698906  597402 main.go:141] libmachine: Using SSH client type: native
	I0127 14:05:08.699144  597402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.145 22 <nil> <nil>}
	I0127 14:05:08.699157  597402 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 14:05:08.825399  597402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:05:08.825437  597402 main.go:141] libmachine: Detecting the provisioner...
	I0127 14:05:08.825453  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:05:08.829085  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:08.829591  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:08.829625  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:08.829859  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:05:08.830100  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:08.830289  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:08.830470  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:05:08.830794  597402 main.go:141] libmachine: Using SSH client type: native
	I0127 14:05:08.831025  597402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.145 22 <nil> <nil>}
	I0127 14:05:08.831043  597402 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 14:05:08.951352  597402 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 14:05:08.951449  597402 main.go:141] libmachine: found compatible host: buildroot
	I0127 14:05:08.951460  597402 main.go:141] libmachine: Provisioning with buildroot...
	I0127 14:05:08.951471  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetMachineName
	I0127 14:05:08.951723  597402 buildroot.go:166] provisioning hostname "kubernetes-upgrade-225004"
	I0127 14:05:08.951758  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetMachineName
	I0127 14:05:08.951939  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:05:08.955360  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:08.955797  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:08.955823  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:08.955956  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:05:08.956165  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:08.956344  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:08.956495  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:05:08.956699  597402 main.go:141] libmachine: Using SSH client type: native
	I0127 14:05:08.956933  597402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.145 22 <nil> <nil>}
	I0127 14:05:08.956954  597402 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-225004 && echo "kubernetes-upgrade-225004" | sudo tee /etc/hostname
	I0127 14:05:09.088071  597402 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-225004
	
	I0127 14:05:09.088102  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:05:09.090337  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.090621  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:09.090652  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.090779  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:05:09.090977  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:09.091150  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:09.091255  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:05:09.091417  597402 main.go:141] libmachine: Using SSH client type: native
	I0127 14:05:09.091578  597402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.145 22 <nil> <nil>}
	I0127 14:05:09.091595  597402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-225004' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-225004/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-225004' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:05:09.209589  597402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:05:09.209631  597402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:05:09.209661  597402 buildroot.go:174] setting up certificates
	I0127 14:05:09.209683  597402 provision.go:84] configureAuth start
	I0127 14:05:09.209701  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetMachineName
	I0127 14:05:09.209936  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetIP
	I0127 14:05:09.212181  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.212483  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:09.212505  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.212658  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:05:09.215038  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.215381  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:09.215430  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.215519  597402 provision.go:143] copyHostCerts
	I0127 14:05:09.215569  597402 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:05:09.215588  597402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:05:09.215637  597402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:05:09.215746  597402 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:05:09.215757  597402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:05:09.215778  597402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:05:09.215842  597402 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:05:09.215850  597402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:05:09.215866  597402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:05:09.215924  597402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-225004 san=[127.0.0.1 192.168.83.145 kubernetes-upgrade-225004 localhost minikube]
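The server certificate above is generated in-process by libmachine; only the org and the SAN list are recorded in the log. For illustration, a rough openssl equivalent of that step is sketched below. The file names and the validity period are placeholders, not the paths or settings minikube actually uses.
	# Hypothetical openssl equivalent of the logged "generating server cert" step (illustration only)
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.kubernetes-upgrade-225004" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.83.145,DNS:kubernetes-upgrade-225004,DNS:localhost,DNS:minikube') \
	  -out server.pem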
	I0127 14:05:09.537417  597402 provision.go:177] copyRemoteCerts
	I0127 14:05:09.537480  597402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:05:09.537525  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:05:09.539798  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.540093  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:09.540123  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.540296  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:05:09.540485  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:09.540625  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:05:09.540776  597402 sshutil.go:53] new ssh client: &{IP:192.168.83.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/id_rsa Username:docker}
	I0127 14:05:09.627606  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 14:05:09.655633  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 14:05:09.681092  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:05:09.704658  597402 provision.go:87] duration metric: took 494.961416ms to configureAuth
	I0127 14:05:09.704678  597402 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:05:09.704834  597402 config.go:182] Loaded profile config "kubernetes-upgrade-225004": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:05:09.704918  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:05:09.707279  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.707612  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:09.707641  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.707762  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:05:09.707972  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:09.708143  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:09.708256  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:05:09.708460  597402 main.go:141] libmachine: Using SSH client type: native
	I0127 14:05:09.708673  597402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.145 22 <nil> <nil>}
	I0127 14:05:09.708698  597402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:05:09.929470  597402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:05:09.929495  597402 main.go:141] libmachine: Checking connection to Docker...
	I0127 14:05:09.929504  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetURL
	I0127 14:05:09.930877  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | using libvirt version 6000000
	I0127 14:05:09.933113  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.933448  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:09.933475  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.933669  597402 main.go:141] libmachine: Docker is up and running!
	I0127 14:05:09.933689  597402 main.go:141] libmachine: Reticulating splines...
	I0127 14:05:09.933697  597402 client.go:171] duration metric: took 24.434605809s to LocalClient.Create
	I0127 14:05:09.933724  597402 start.go:167] duration metric: took 24.434675948s to libmachine.API.Create "kubernetes-upgrade-225004"
	I0127 14:05:09.933758  597402 start.go:293] postStartSetup for "kubernetes-upgrade-225004" (driver="kvm2")
	I0127 14:05:09.933776  597402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:05:09.933794  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:05:09.934052  597402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:05:09.934082  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:05:09.936411  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.936750  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:09.936780  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:09.936890  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:05:09.937059  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:09.937227  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:05:09.937388  597402 sshutil.go:53] new ssh client: &{IP:192.168.83.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/id_rsa Username:docker}
	I0127 14:05:10.025149  597402 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:05:10.029330  597402 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:05:10.029352  597402 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:05:10.029405  597402 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:05:10.029496  597402 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:05:10.029625  597402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:05:10.040663  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:05:10.065599  597402 start.go:296] duration metric: took 131.824267ms for postStartSetup
	I0127 14:05:10.065646  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetConfigRaw
	I0127 14:05:10.066226  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetIP
	I0127 14:05:10.068733  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:10.069038  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:10.069069  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:10.069243  597402 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/config.json ...
	I0127 14:05:10.069405  597402 start.go:128] duration metric: took 24.591423262s to createHost
	I0127 14:05:10.069426  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:05:10.071676  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:10.071987  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:10.072007  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:10.072236  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:05:10.072427  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:10.072575  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:10.072695  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:05:10.072862  597402 main.go:141] libmachine: Using SSH client type: native
	I0127 14:05:10.073014  597402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.145 22 <nil> <nil>}
	I0127 14:05:10.073024  597402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:05:10.185651  597402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986710.154925926
	
	I0127 14:05:10.185670  597402 fix.go:216] guest clock: 1737986710.154925926
	I0127 14:05:10.185679  597402 fix.go:229] Guest: 2025-01-27 14:05:10.154925926 +0000 UTC Remote: 2025-01-27 14:05:10.069416637 +0000 UTC m=+25.984894706 (delta=85.509289ms)
	I0127 14:05:10.185711  597402 fix.go:200] guest clock delta is within tolerance: 85.509289ms
	I0127 14:05:10.185718  597402 start.go:83] releasing machines lock for "kubernetes-upgrade-225004", held for 24.70787906s
	I0127 14:05:10.185740  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:05:10.185919  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetIP
	I0127 14:05:10.188146  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:10.188448  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:10.188480  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:10.188583  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:05:10.189000  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:05:10.189184  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:05:10.189252  597402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:05:10.189306  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:05:10.189382  597402 ssh_runner.go:195] Run: cat /version.json
	I0127 14:05:10.189411  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:05:10.191933  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:10.192105  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:10.192339  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:10.192372  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:10.192408  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:05:10.192522  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:10.192550  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:10.192576  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:10.192783  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:05:10.192783  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:05:10.192915  597402 sshutil.go:53] new ssh client: &{IP:192.168.83.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/id_rsa Username:docker}
	I0127 14:05:10.192971  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:05:10.193112  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:05:10.193221  597402 sshutil.go:53] new ssh client: &{IP:192.168.83.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/id_rsa Username:docker}
	I0127 14:05:10.274992  597402 ssh_runner.go:195] Run: systemctl --version
	I0127 14:05:10.300652  597402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:05:10.462336  597402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:05:10.468884  597402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:05:10.468958  597402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:05:10.485945  597402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:05:10.485967  597402 start.go:495] detecting cgroup driver to use...
	I0127 14:05:10.486036  597402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:05:10.504167  597402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:05:10.518950  597402 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:05:10.519006  597402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:05:10.533379  597402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:05:10.550241  597402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:05:10.672406  597402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:05:10.814524  597402 docker.go:233] disabling docker service ...
	I0127 14:05:10.814614  597402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:05:10.829313  597402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:05:10.842908  597402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:05:10.983285  597402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:05:11.110130  597402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:05:11.124099  597402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:05:11.143191  597402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 14:05:11.143270  597402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:05:11.154731  597402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:05:11.154800  597402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:05:11.166200  597402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:05:11.176507  597402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
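The four sed invocations above all edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager to cgroupfs, and force conmon into the pod cgroup. Gathered into one commented sketch (same expressions as in the log):
	# CRI-O tweaks performed over SSH by minikube, consolidated for readability
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"  # pin the pause image
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"             # match the kubelet's cgroupfs driver
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                                              # drop any existing conmon_cgroup
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"                       # re-add it right after cgroup_manager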
	I0127 14:05:11.187574  597402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:05:11.200440  597402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:05:11.210013  597402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:05:11.210070  597402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:05:11.222654  597402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
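The sysctl probe above fails because br_netfilter is not yet loaded, so minikube loads the module and then enables IPv4 forwarding, as the two commands it just ran show. The same fallback, sketched with comments:
	# Fallback when /proc/sys/net/bridge/bridge-nf-call-iptables is missing
	sudo modprobe br_netfilter                               # load the bridge netfilter module
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'      # make sure IPv4 forwarding is on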
	I0127 14:05:11.232343  597402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:05:11.366769  597402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:05:11.459445  597402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:05:11.459531  597402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:05:11.464754  597402 start.go:563] Will wait 60s for crictl version
	I0127 14:05:11.464819  597402 ssh_runner.go:195] Run: which crictl
	I0127 14:05:11.468928  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:05:11.514152  597402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:05:11.514230  597402 ssh_runner.go:195] Run: crio --version
	I0127 14:05:11.543609  597402 ssh_runner.go:195] Run: crio --version
	I0127 14:05:11.573529  597402 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 14:05:11.574748  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetIP
	I0127 14:05:11.577597  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:11.578011  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:00 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:05:11.578037  597402 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:05:11.578229  597402 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0127 14:05:11.582381  597402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
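The /bin/bash one-liner above keeps /etc/hosts idempotent: any stale host.minikube.internal entry is filtered out, the current gateway IP is appended, and the result is copied back with sudo. The same logic, spread over several lines for readability (temp-file name kept as in the log):
	# Rewrite /etc/hosts with exactly one host.minikube.internal entry
	{ grep -v $'\thost.minikube.internal$' /etc/hosts        # keep everything except a stale entry
	  echo $'192.168.83.1\thost.minikube.internal'           # append the current gateway address
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts                             # install the staged copy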
	I0127 14:05:11.596363  597402 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-225004 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-225004 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.145 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:05:11.596504  597402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 14:05:11.596564  597402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:05:11.633430  597402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 14:05:11.633490  597402 ssh_runner.go:195] Run: which lz4
	I0127 14:05:11.637824  597402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:05:11.642389  597402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:05:11.642419  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 14:05:13.394158  597402 crio.go:462] duration metric: took 1.756364054s to copy over tarball
	I0127 14:05:13.394250  597402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:05:15.957914  597402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.563626525s)
	I0127 14:05:15.957950  597402 crio.go:469] duration metric: took 2.563760353s to extract the tarball
	I0127 14:05:15.957957  597402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 14:05:16.005613  597402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:05:16.053101  597402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 14:05:16.053137  597402 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 14:05:16.053186  597402 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:05:16.053235  597402 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:05:16.053257  597402 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:05:16.053260  597402 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:05:16.053285  597402 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 14:05:16.053235  597402 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:05:16.053300  597402 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 14:05:16.053247  597402 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:05:16.054958  597402 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:05:16.054996  597402 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:05:16.054996  597402 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:05:16.054984  597402 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 14:05:16.055054  597402 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 14:05:16.054998  597402 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:05:16.055224  597402 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:05:16.055357  597402 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:05:16.219267  597402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:05:16.223465  597402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 14:05:16.231585  597402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:05:16.244962  597402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 14:05:16.245183  597402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:05:16.253469  597402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:05:16.268433  597402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 14:05:16.293667  597402 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 14:05:16.293726  597402 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:05:16.293772  597402 ssh_runner.go:195] Run: which crictl
	I0127 14:05:16.362012  597402 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 14:05:16.362067  597402 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 14:05:16.362119  597402 ssh_runner.go:195] Run: which crictl
	I0127 14:05:16.397841  597402 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 14:05:16.397911  597402 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:05:16.397970  597402 ssh_runner.go:195] Run: which crictl
	I0127 14:05:16.412462  597402 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 14:05:16.412491  597402 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 14:05:16.412516  597402 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:05:16.412525  597402 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:05:16.412561  597402 ssh_runner.go:195] Run: which crictl
	I0127 14:05:16.412565  597402 ssh_runner.go:195] Run: which crictl
	I0127 14:05:16.419769  597402 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 14:05:16.419794  597402 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 14:05:16.419813  597402 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:05:16.419823  597402 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 14:05:16.419852  597402 ssh_runner.go:195] Run: which crictl
	I0127 14:05:16.419890  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:05:16.419857  597402 ssh_runner.go:195] Run: which crictl
	I0127 14:05:16.419865  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:05:16.419955  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:05:16.421409  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:05:16.424667  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:05:16.523655  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:05:16.523666  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:05:16.523790  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:05:16.523854  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:05:16.523997  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:05:16.524038  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:05:16.669306  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:05:16.669348  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:05:16.669372  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:05:16.669408  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:05:16.669432  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:05:16.669491  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:05:16.669536  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:05:16.820205  597402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 14:05:16.820275  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:05:16.820300  597402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 14:05:16.820394  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:05:16.820415  597402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 14:05:16.820476  597402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 14:05:16.820586  597402 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:05:16.859345  597402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 14:05:16.873300  597402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 14:05:16.879760  597402 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 14:05:16.944547  597402 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:05:17.087504  597402 cache_images.go:92] duration metric: took 1.034347879s to LoadCachedImages
	W0127 14:05:17.087647  597402 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0127 14:05:17.087675  597402 kubeadm.go:934] updating node { 192.168.83.145 8443 v1.20.0 crio true true} ...
	I0127 14:05:17.087834  597402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-225004 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-225004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:05:17.087920  597402 ssh_runner.go:195] Run: crio config
	I0127 14:05:17.146385  597402 cni.go:84] Creating CNI manager for ""
	I0127 14:05:17.146413  597402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:05:17.146427  597402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:05:17.146452  597402 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.145 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-225004 NodeName:kubernetes-upgrade-225004 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 14:05:17.146640  597402 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-225004"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:05:17.146714  597402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 14:05:17.156904  597402 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:05:17.156973  597402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:05:17.168729  597402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0127 14:05:17.185287  597402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:05:17.201689  597402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
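The 2126-byte payload written to /var/tmp/minikube/kubeadm.yaml.new here is the kubeadm config rendered above; it is promoted to kubeadm.yaml later in the log. This excerpt does not capture the eventual kubeadm invocation, but bootstrapping from such a staged file would typically look something like the following (the init flags are illustrative, not taken from this log):
	# Hypothetical bootstrap from the staged config
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml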
	I0127 14:05:17.221092  597402 ssh_runner.go:195] Run: grep 192.168.83.145	control-plane.minikube.internal$ /etc/hosts
	I0127 14:05:17.225270  597402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:05:17.240177  597402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:05:17.378096  597402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:05:17.396030  597402 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004 for IP: 192.168.83.145
	I0127 14:05:17.396050  597402 certs.go:194] generating shared ca certs ...
	I0127 14:05:17.396071  597402 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:05:17.396264  597402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:05:17.396373  597402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:05:17.396395  597402 certs.go:256] generating profile certs ...
	I0127 14:05:17.396481  597402 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/client.key
	I0127 14:05:17.396502  597402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/client.crt with IP's: []
	I0127 14:05:17.516449  597402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/client.crt ...
	I0127 14:05:17.516490  597402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/client.crt: {Name:mk0ecdcc408f25cc464a9d9e44a104fed2456de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:05:17.516707  597402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/client.key ...
	I0127 14:05:17.516731  597402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/client.key: {Name:mkb74ecb4ac75feb394494f59b03dab07aab979e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:05:17.516872  597402 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.key.2810f21f
	I0127 14:05:17.516896  597402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.crt.2810f21f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.145]
	I0127 14:05:17.819887  597402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.crt.2810f21f ...
	I0127 14:05:17.819923  597402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.crt.2810f21f: {Name:mk4ba0f60d01adef3ec94552fca5a7909f97bcb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:05:17.820112  597402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.key.2810f21f ...
	I0127 14:05:17.820132  597402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.key.2810f21f: {Name:mk23af129eb1515ee2c562ef0a33b221db659001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:05:17.820243  597402 certs.go:381] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.crt.2810f21f -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.crt
	I0127 14:05:17.820317  597402 certs.go:385] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.key.2810f21f -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.key
	I0127 14:05:17.820371  597402 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/proxy-client.key
	I0127 14:05:17.820387  597402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/proxy-client.crt with IP's: []
	I0127 14:05:18.099130  597402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/proxy-client.crt ...
	I0127 14:05:18.099161  597402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/proxy-client.crt: {Name:mkd0b314e8a422040c294622ccc92065cb91fc9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:05:18.099333  597402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/proxy-client.key ...
	I0127 14:05:18.099349  597402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/proxy-client.key: {Name:mkdd34d9d41e49233746908a03db252927c47b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:05:18.099552  597402 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:05:18.099593  597402 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:05:18.099604  597402 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:05:18.099624  597402 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:05:18.099646  597402 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:05:18.099667  597402 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:05:18.099703  597402 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:05:18.100270  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:05:18.125709  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:05:18.153131  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:05:18.180304  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:05:18.204954  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 14:05:18.234365  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:05:18.265883  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:05:18.299138  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:05:18.328190  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:05:18.361068  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:05:18.383828  597402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:05:18.407473  597402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:05:18.424378  597402 ssh_runner.go:195] Run: openssl version
	I0127 14:05:18.430053  597402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:05:18.440813  597402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:05:18.445346  597402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:05:18.445404  597402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:05:18.451348  597402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:05:18.461985  597402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:05:18.473396  597402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:05:18.477839  597402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:05:18.477879  597402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:05:18.483273  597402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:05:18.494227  597402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:05:18.504653  597402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:05:18.509055  597402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:05:18.509093  597402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:05:18.514497  597402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
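The commands above install each of the three PEMs into the system trust store both under its own name and under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0 and 51391683.0 in this run). The general pattern, sketched for a single certificate (the CERT variable is just for illustration):
	# Link a CA certificate under its subject-hash name so OpenSSL can discover it
	CERT=/usr/share/ca-certificates/minikubeCA.pem           # example path from the log above
	HASH=$(openssl x509 -hash -noout -in "$CERT")            # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"           # ".0" marks the first certificate with this hash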
	I0127 14:05:18.525177  597402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:05:18.529340  597402 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 14:05:18.529398  597402 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-225004 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-225004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.145 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:05:18.529483  597402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:05:18.529544  597402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:05:18.571661  597402 cri.go:89] found id: ""
	I0127 14:05:18.571716  597402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:05:18.581239  597402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:05:18.591161  597402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:05:18.600892  597402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:05:18.600907  597402 kubeadm.go:157] found existing configuration files:
	
	I0127 14:05:18.600938  597402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:05:18.612314  597402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:05:18.612365  597402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:05:18.621967  597402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:05:18.631592  597402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:05:18.631647  597402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:05:18.643179  597402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:05:18.653538  597402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:05:18.653609  597402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:05:18.664816  597402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:05:18.674753  597402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:05:18.674809  597402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
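The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any of admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. A condensed sketch of the same check as a shell loop (logic mirrored from the log, not minikube's actual source):

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
            || sudo rm -f "/etc/kubernetes/${f}.conf"   # drop configs that point elsewhere (or are absent)
    done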
	I0127 14:05:18.685042  597402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:05:18.825112  597402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 14:05:18.825281  597402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:05:18.996219  597402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:05:18.996392  597402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:05:18.996519  597402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 14:05:19.236641  597402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:05:19.349602  597402 out.go:235]   - Generating certificates and keys ...
	I0127 14:05:19.349792  597402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:05:19.349916  597402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:05:19.364205  597402 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 14:05:19.469471  597402 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 14:05:19.743546  597402 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 14:05:19.882157  597402 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 14:05:19.978569  597402 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 14:05:19.979023  597402 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-225004 localhost] and IPs [192.168.83.145 127.0.0.1 ::1]
	I0127 14:05:20.073657  597402 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 14:05:20.074004  597402 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-225004 localhost] and IPs [192.168.83.145 127.0.0.1 ::1]
	I0127 14:05:20.186590  597402 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 14:05:20.283130  597402 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 14:05:20.591897  597402 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 14:05:20.592169  597402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:05:20.727257  597402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:05:20.997311  597402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:05:21.119254  597402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:05:21.187254  597402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:05:21.202723  597402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:05:21.204326  597402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:05:21.204409  597402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:05:21.344072  597402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:05:21.345806  597402 out.go:235]   - Booting up control plane ...
	I0127 14:05:21.345948  597402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:05:21.351994  597402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:05:21.352119  597402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:05:21.353592  597402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:05:21.360503  597402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 14:06:01.350449  597402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 14:06:01.351715  597402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:06:01.351926  597402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:06:06.351828  597402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:06:06.352344  597402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:06:16.352470  597402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:06:16.352750  597402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:06:36.352934  597402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:06:36.353209  597402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:07:16.354448  597402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:07:16.354744  597402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:07:16.354764  597402 kubeadm.go:310] 
	I0127 14:07:16.354811  597402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 14:07:16.354867  597402 kubeadm.go:310] 		timed out waiting for the condition
	I0127 14:07:16.354877  597402 kubeadm.go:310] 
	I0127 14:07:16.354926  597402 kubeadm.go:310] 	This error is likely caused by:
	I0127 14:07:16.354983  597402 kubeadm.go:310] 		- The kubelet is not running
	I0127 14:07:16.355123  597402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 14:07:16.355134  597402 kubeadm.go:310] 
	I0127 14:07:16.355269  597402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 14:07:16.355339  597402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 14:07:16.355385  597402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 14:07:16.355396  597402 kubeadm.go:310] 
	I0127 14:07:16.355510  597402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 14:07:16.355618  597402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 14:07:16.355628  597402 kubeadm.go:310] 
	I0127 14:07:16.355751  597402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 14:07:16.355864  597402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 14:07:16.355965  597402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 14:07:16.356060  597402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 14:07:16.356070  597402 kubeadm.go:310] 
	I0127 14:07:16.356601  597402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:07:16.356731  597402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 14:07:16.356795  597402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
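kubeadm's troubleshooting advice above can be replayed against this profile from the host; a sketch using minikube ssh (profile name taken from this run, commands are the ones kubeadm suggests):

    PROFILE=kubernetes-upgrade-225004
    minikube -p "$PROFILE" ssh -- curl -sSL http://localhost:10248/healthz      # the kubelet health probe that kept failing
    minikube -p "$PROFILE" ssh -- sudo systemctl status kubelet --no-pager
    minikube -p "$PROFILE" ssh -- sudo journalctl -xeu kubelet
    minikube -p "$PROFILE" ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"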
	W0127 14:07:16.356954  597402 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-225004 localhost] and IPs [192.168.83.145 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-225004 localhost] and IPs [192.168.83.145 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 14:07:16.356999  597402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 14:07:16.861520  597402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:07:16.879271  597402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:07:16.892925  597402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:07:16.892945  597402 kubeadm.go:157] found existing configuration files:
	
	I0127 14:07:16.892994  597402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:07:16.905161  597402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:07:16.905231  597402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:07:16.916890  597402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:07:16.929668  597402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:07:16.929756  597402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:07:16.940496  597402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:07:16.954379  597402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:07:16.954439  597402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:07:16.965702  597402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:07:16.976914  597402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:07:16.976956  597402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:07:16.990422  597402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:07:17.081668  597402 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 14:07:17.081855  597402 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:07:17.244143  597402 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:07:17.244342  597402 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:07:17.244500  597402 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 14:07:17.486116  597402 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:07:17.487683  597402 out.go:235]   - Generating certificates and keys ...
	I0127 14:07:17.487806  597402 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:07:17.487895  597402 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:07:17.488011  597402 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:07:17.488093  597402 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:07:17.488271  597402 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:07:17.488349  597402 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:07:17.489022  597402 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:07:17.489387  597402 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:07:17.489985  597402 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:07:17.490406  597402 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:07:17.490472  597402 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:07:17.490558  597402 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:07:17.856081  597402 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:07:18.147610  597402 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:07:18.257005  597402 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:07:18.413664  597402 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:07:18.440165  597402 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:07:18.440312  597402 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:07:18.440367  597402 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:07:18.613892  597402 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:07:18.615257  597402 out.go:235]   - Booting up control plane ...
	I0127 14:07:18.615440  597402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:07:18.619860  597402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:07:18.621420  597402 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:07:18.623363  597402 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:07:18.630167  597402 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 14:07:58.627963  597402 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 14:07:58.628075  597402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:07:58.628346  597402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:08:03.628943  597402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:08:03.629244  597402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:08:13.629195  597402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:08:13.629504  597402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:08:33.630078  597402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:08:33.630284  597402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:09:13.631631  597402 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:09:13.631953  597402 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:09:13.631986  597402 kubeadm.go:310] 
	I0127 14:09:13.632040  597402 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 14:09:13.632085  597402 kubeadm.go:310] 		timed out waiting for the condition
	I0127 14:09:13.632102  597402 kubeadm.go:310] 
	I0127 14:09:13.632148  597402 kubeadm.go:310] 	This error is likely caused by:
	I0127 14:09:13.632202  597402 kubeadm.go:310] 		- The kubelet is not running
	I0127 14:09:13.632346  597402 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 14:09:13.632360  597402 kubeadm.go:310] 
	I0127 14:09:13.632500  597402 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 14:09:13.632566  597402 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 14:09:13.632626  597402 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 14:09:13.632637  597402 kubeadm.go:310] 
	I0127 14:09:13.632781  597402 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 14:09:13.632898  597402 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 14:09:13.632914  597402 kubeadm.go:310] 
	I0127 14:09:13.633065  597402 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 14:09:13.633207  597402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 14:09:13.633322  597402 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 14:09:13.633388  597402 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 14:09:13.633396  597402 kubeadm.go:310] 
	I0127 14:09:13.633895  597402 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:09:13.634007  597402 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 14:09:13.634118  597402 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 14:09:13.634199  597402 kubeadm.go:394] duration metric: took 3m55.104807001s to StartCluster
	I0127 14:09:13.634244  597402 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:09:13.634301  597402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:09:13.689699  597402 cri.go:89] found id: ""
	I0127 14:09:13.689723  597402 logs.go:282] 0 containers: []
	W0127 14:09:13.689731  597402 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:09:13.689737  597402 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:09:13.689807  597402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:09:13.728931  597402 cri.go:89] found id: ""
	I0127 14:09:13.728950  597402 logs.go:282] 0 containers: []
	W0127 14:09:13.728961  597402 logs.go:284] No container was found matching "etcd"
	I0127 14:09:13.728968  597402 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:09:13.729029  597402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:09:13.761797  597402 cri.go:89] found id: ""
	I0127 14:09:13.761821  597402 logs.go:282] 0 containers: []
	W0127 14:09:13.761828  597402 logs.go:284] No container was found matching "coredns"
	I0127 14:09:13.761833  597402 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:09:13.761888  597402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:09:13.799972  597402 cri.go:89] found id: ""
	I0127 14:09:13.799999  597402 logs.go:282] 0 containers: []
	W0127 14:09:13.800008  597402 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:09:13.800017  597402 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:09:13.800077  597402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:09:13.832110  597402 cri.go:89] found id: ""
	I0127 14:09:13.832141  597402 logs.go:282] 0 containers: []
	W0127 14:09:13.832151  597402 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:09:13.832158  597402 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:09:13.832222  597402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:09:13.870171  597402 cri.go:89] found id: ""
	I0127 14:09:13.870196  597402 logs.go:282] 0 containers: []
	W0127 14:09:13.870206  597402 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:09:13.870213  597402 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:09:13.870283  597402 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:09:13.902504  597402 cri.go:89] found id: ""
	I0127 14:09:13.902533  597402 logs.go:282] 0 containers: []
	W0127 14:09:13.902539  597402 logs.go:284] No container was found matching "kindnet"
	I0127 14:09:13.902550  597402 logs.go:123] Gathering logs for container status ...
	I0127 14:09:13.902561  597402 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:09:13.942247  597402 logs.go:123] Gathering logs for kubelet ...
	I0127 14:09:13.942272  597402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:09:13.995061  597402 logs.go:123] Gathering logs for dmesg ...
	I0127 14:09:13.995090  597402 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:09:14.007562  597402 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:09:14.007580  597402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:09:14.121200  597402 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:09:14.121225  597402 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:09:14.121238  597402 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
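The gathering pass above (container status, kubelet journal, dmesg, node description, CRI-O journal) can be reproduced by hand when triaging a similar failure; a sketch over minikube ssh, assuming the same profile:

    PROFILE=kubernetes-upgrade-225004
    minikube -p "$PROFILE" ssh -- sudo journalctl -u crio -n 400     # container runtime log
    minikube -p "$PROFILE" ssh -- "sudo dmesg | tail -n 400"         # kernel warnings (cgroups, OOM, etc.)
    minikube -p "$PROFILE" kubectl -- describe nodes                 # fails here while the apiserver is down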
	W0127 14:09:14.227191  597402 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 14:09:14.227256  597402 out.go:270] * 
	W0127 14:09:14.227326  597402 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 14:09:14.227345  597402 out.go:270] * 
	* 
	W0127 14:09:14.228218  597402 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 14:09:14.230732  597402 out.go:201] 
	W0127 14:09:14.231846  597402 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 14:09:14.231898  597402 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 14:09:14.231927  597402 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 14:09:14.233277  597402 out.go:201] 

                                                
                                                
** /stderr **
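The failure captured above is the kubelet never answering on http://localhost:10248/healthz during kubeadm's wait-control-plane phase. One way to follow the troubleshooting hints printed in that output, assuming shell access to the kubernetes-upgrade-225004 VM (for example via minikube ssh), is a sketch like:

	minikube ssh -p kubernetes-upgrade-225004
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

If the kubelet journal points at a cgroup-driver mismatch, the suggestion at the end of the output is to retry the same start with the extra kubelet flag (profile name and version taken from the failing command above; whether this resolves the v1.20.0 bring-up on this image is not verified here):

	minikube start -p kubernetes-upgrade-225004 --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd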
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-225004 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-225004
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-225004: (1.359659804s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-225004 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-225004 status --format={{.Host}}: exit status 7 (65.43696ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225004 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-225004 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.121935498s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-225004 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225004 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-225004 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (81.03912ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-225004] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-225004
	    minikube start -p kubernetes-upgrade-225004 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2250042 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-225004 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
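This refusal is the expected outcome of the step above ("Attempting to downgrade Kubernetes (should fail)"): the run exits with status 106 and the K8S_DOWNGRADE_UNSUPPORTED advice. For reference, the first recovery path from that advice, written as a single sequence (commands copied from the suggestion; a sketch, not something the test executes):

	minikube delete -p kubernetes-upgrade-225004
	minikube start -p kubernetes-upgrade-225004 --kubernetes-version=v1.20.0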
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-225004 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0127 14:10:34.434807  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-225004 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (13m52.521672816s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-225004] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-225004" primary control-plane node in "kubernetes-upgrade-225004" cluster
	* Updating the running kvm2 "kubernetes-upgrade-225004" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:09:55.014808  603347 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:09:55.015043  603347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:09:55.015052  603347 out.go:358] Setting ErrFile to fd 2...
	I0127 14:09:55.015057  603347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:09:55.015240  603347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:09:55.015717  603347 out.go:352] Setting JSON to false
	I0127 14:09:55.016614  603347 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17540,"bootTime":1737969455,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:09:55.016720  603347 start.go:139] virtualization: kvm guest
	I0127 14:09:55.018263  603347 out.go:177] * [kubernetes-upgrade-225004] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:09:55.019815  603347 notify.go:220] Checking for updates...
	I0127 14:09:55.019818  603347 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:09:55.020967  603347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:09:55.022156  603347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:09:55.023277  603347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:09:55.024287  603347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:09:55.025285  603347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:09:55.026603  603347 config.go:182] Loaded profile config "kubernetes-upgrade-225004": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:09:55.027025  603347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:09:55.027081  603347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:09:55.043721  603347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0127 14:09:55.044145  603347 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:09:55.044751  603347 main.go:141] libmachine: Using API Version  1
	I0127 14:09:55.044771  603347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:09:55.045110  603347 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:09:55.045348  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:09:55.045650  603347 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:09:55.045926  603347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:09:55.045976  603347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:09:55.060490  603347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0127 14:09:55.060827  603347 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:09:55.061303  603347 main.go:141] libmachine: Using API Version  1
	I0127 14:09:55.061329  603347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:09:55.061755  603347 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:09:55.061981  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:09:55.094491  603347 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:09:55.095586  603347 start.go:297] selected driver: kvm2
	I0127 14:09:55.095600  603347 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-225004 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-up
grade-225004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:09:55.095680  603347 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:09:55.096405  603347 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:09:55.096468  603347 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:09:55.110109  603347 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:09:55.110483  603347 cni.go:84] Creating CNI manager for ""
	I0127 14:09:55.110536  603347 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:09:55.110566  603347 start.go:340] cluster config:
	{Name:kubernetes-upgrade-225004 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-225004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:09:55.110685  603347 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:09:55.112122  603347 out.go:177] * Starting "kubernetes-upgrade-225004" primary control-plane node in "kubernetes-upgrade-225004" cluster
	I0127 14:09:55.113176  603347 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:09:55.113219  603347 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:09:55.113230  603347 cache.go:56] Caching tarball of preloaded images
	I0127 14:09:55.113328  603347 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:09:55.113342  603347 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:09:55.113427  603347 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/config.json ...
	I0127 14:09:55.113623  603347 start.go:360] acquireMachinesLock for kubernetes-upgrade-225004: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:09:55.113668  603347 start.go:364] duration metric: took 26.525µs to acquireMachinesLock for "kubernetes-upgrade-225004"
	I0127 14:09:55.113682  603347 start.go:96] Skipping create...Using existing machine configuration
	I0127 14:09:55.113700  603347 fix.go:54] fixHost starting: 
	I0127 14:09:55.113945  603347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:09:55.113975  603347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:09:55.126626  603347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33841
	I0127 14:09:55.126985  603347 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:09:55.127485  603347 main.go:141] libmachine: Using API Version  1
	I0127 14:09:55.127504  603347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:09:55.127780  603347 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:09:55.127999  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:09:55.128143  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetState
	I0127 14:09:55.129487  603347 fix.go:112] recreateIfNeeded on kubernetes-upgrade-225004: state=Running err=<nil>
	W0127 14:09:55.129503  603347 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 14:09:55.130933  603347 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-225004" VM ...
	I0127 14:09:55.131978  603347 machine.go:93] provisionDockerMachine start ...
	I0127 14:09:55.131997  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:09:55.132155  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:09:55.134685  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.135076  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:55.135101  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.135240  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:09:55.135411  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:55.135550  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:55.135684  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:09:55.135797  603347 main.go:141] libmachine: Using SSH client type: native
	I0127 14:09:55.135975  603347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.145 22 <nil> <nil>}
	I0127 14:09:55.135984  603347 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:09:55.257535  603347 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-225004
	
	I0127 14:09:55.257572  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetMachineName
	I0127 14:09:55.257801  603347 buildroot.go:166] provisioning hostname "kubernetes-upgrade-225004"
	I0127 14:09:55.257824  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetMachineName
	I0127 14:09:55.257982  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:09:55.260434  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.260733  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:55.260763  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.260908  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:09:55.261058  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:55.261223  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:55.261336  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:09:55.261513  603347 main.go:141] libmachine: Using SSH client type: native
	I0127 14:09:55.261717  603347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.145 22 <nil> <nil>}
	I0127 14:09:55.261730  603347 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-225004 && echo "kubernetes-upgrade-225004" | sudo tee /etc/hostname
	I0127 14:09:55.404763  603347 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-225004
	
	I0127 14:09:55.404795  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:09:55.407880  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.408309  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:55.408346  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.408468  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:09:55.408633  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:55.408759  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:55.408947  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:09:55.409123  603347 main.go:141] libmachine: Using SSH client type: native
	I0127 14:09:55.409293  603347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.145 22 <nil> <nil>}
	I0127 14:09:55.409309  603347 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-225004' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-225004/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-225004' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:09:55.553172  603347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:09:55.553213  603347 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:09:55.553252  603347 buildroot.go:174] setting up certificates
	I0127 14:09:55.553266  603347 provision.go:84] configureAuth start
	I0127 14:09:55.553286  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetMachineName
	I0127 14:09:55.553626  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetIP
	I0127 14:09:55.556704  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.557195  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:55.557227  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.557429  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:09:55.560274  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.560698  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:55.560737  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.560893  603347 provision.go:143] copyHostCerts
	I0127 14:09:55.560953  603347 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:09:55.560967  603347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:09:55.561034  603347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:09:55.561173  603347 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:09:55.561186  603347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:09:55.561224  603347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:09:55.561344  603347 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:09:55.561363  603347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:09:55.561397  603347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:09:55.561490  603347 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-225004 san=[127.0.0.1 192.168.83.145 kubernetes-upgrade-225004 localhost minikube]
	I0127 14:09:55.708718  603347 provision.go:177] copyRemoteCerts
	I0127 14:09:55.708767  603347 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:09:55.708790  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:09:55.711058  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.711408  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:55.711441  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.711590  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:09:55.711793  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:55.711943  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:09:55.712112  603347 sshutil.go:53] new ssh client: &{IP:192.168.83.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/id_rsa Username:docker}
	I0127 14:09:55.810638  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:09:55.849508  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 14:09:55.890952  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 14:09:55.928068  603347 provision.go:87] duration metric: took 374.786057ms to configureAuth
	I0127 14:09:55.928100  603347 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:09:55.928265  603347 config.go:182] Loaded profile config "kubernetes-upgrade-225004": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:09:55.928354  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:09:55.930873  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.931228  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:55.931258  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:55.931469  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:09:55.931699  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:55.931884  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:55.932057  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:09:55.932245  603347 main.go:141] libmachine: Using SSH client type: native
	I0127 14:09:55.932478  603347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.145 22 <nil> <nil>}
	I0127 14:09:55.932503  603347 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:09:57.061317  603347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:09:57.061353  603347 machine.go:96] duration metric: took 1.929358568s to provisionDockerMachine
	I0127 14:09:57.061375  603347 start.go:293] postStartSetup for "kubernetes-upgrade-225004" (driver="kvm2")
	I0127 14:09:57.061392  603347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:09:57.061439  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:09:57.061763  603347 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:09:57.061803  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:09:57.064402  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:57.064750  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:57.064780  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:57.064920  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:09:57.065107  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:57.065297  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:09:57.065497  603347 sshutil.go:53] new ssh client: &{IP:192.168.83.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/id_rsa Username:docker}
	I0127 14:09:57.151305  603347 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:09:57.155841  603347 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:09:57.155861  603347 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:09:57.155908  603347 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:09:57.155990  603347 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:09:57.156076  603347 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:09:57.166113  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:09:57.189930  603347 start.go:296] duration metric: took 128.540887ms for postStartSetup
	I0127 14:09:57.189965  603347 fix.go:56] duration metric: took 2.076274104s for fixHost
	I0127 14:09:57.189981  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:09:57.192170  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:57.192487  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:57.192512  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:57.192662  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:09:57.192827  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:57.192990  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:57.193108  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:09:57.193267  603347 main.go:141] libmachine: Using SSH client type: native
	I0127 14:09:57.193456  603347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.83.145 22 <nil> <nil>}
	I0127 14:09:57.193469  603347 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:09:57.305669  603347 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986997.295258589
	
	I0127 14:09:57.305690  603347 fix.go:216] guest clock: 1737986997.295258589
	I0127 14:09:57.305711  603347 fix.go:229] Guest: 2025-01-27 14:09:57.295258589 +0000 UTC Remote: 2025-01-27 14:09:57.189969638 +0000 UTC m=+2.210651492 (delta=105.288951ms)
	I0127 14:09:57.305731  603347 fix.go:200] guest clock delta is within tolerance: 105.288951ms
	I0127 14:09:57.305736  603347 start.go:83] releasing machines lock for "kubernetes-upgrade-225004", held for 2.192060224s
	I0127 14:09:57.305754  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:09:57.305948  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetIP
	I0127 14:09:57.308276  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:57.308619  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:57.308648  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:57.308782  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:09:57.309259  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:09:57.309435  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .DriverName
	I0127 14:09:57.309537  603347 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:09:57.309592  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:09:57.309623  603347 ssh_runner.go:195] Run: cat /version.json
	I0127 14:09:57.309649  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHHostname
	I0127 14:09:57.311981  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:57.312364  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:57.312402  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:57.312426  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:57.312581  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:09:57.312774  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:57.312957  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:09:57.312995  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:09:57.313027  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:09:57.313109  603347 sshutil.go:53] new ssh client: &{IP:192.168.83.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/id_rsa Username:docker}
	I0127 14:09:57.313281  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHPort
	I0127 14:09:57.313414  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHKeyPath
	I0127 14:09:57.313543  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetSSHUsername
	I0127 14:09:57.313718  603347 sshutil.go:53] new ssh client: &{IP:192.168.83.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/kubernetes-upgrade-225004/id_rsa Username:docker}
	I0127 14:09:57.441459  603347 ssh_runner.go:195] Run: systemctl --version
	I0127 14:09:57.498013  603347 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:09:57.767516  603347 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:09:57.806974  603347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:09:57.807053  603347 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:09:57.898062  603347 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 14:09:57.898090  603347 start.go:495] detecting cgroup driver to use...
	I0127 14:09:57.898181  603347 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:09:57.970972  603347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:09:58.014573  603347 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:09:58.014654  603347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:09:58.109077  603347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:09:58.140191  603347 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:09:58.408782  603347 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:09:58.675987  603347 docker.go:233] disabling docker service ...
	I0127 14:09:58.676063  603347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:09:58.695944  603347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:09:58.712312  603347 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:09:58.905085  603347 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:09:59.130836  603347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:09:59.163329  603347 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:09:59.213624  603347 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:09:59.213696  603347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:09:59.229562  603347 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:09:59.229631  603347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:09:59.244743  603347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:09:59.259748  603347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:09:59.273944  603347 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:09:59.287600  603347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:09:59.301310  603347 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:09:59.314365  603347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:09:59.327213  603347 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:09:59.344794  603347 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:09:59.357346  603347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:09:59.556462  603347 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:11:30.036350  603347 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.479844944s)
	I0127 14:11:30.036386  603347 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:11:30.036450  603347 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:11:30.045373  603347 start.go:563] Will wait 60s for crictl version
	I0127 14:11:30.045448  603347 ssh_runner.go:195] Run: which crictl
	I0127 14:11:30.051192  603347 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:11:30.120022  603347 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:11:30.120122  603347 ssh_runner.go:195] Run: crio --version
	I0127 14:11:30.165534  603347 ssh_runner.go:195] Run: crio --version
	I0127 14:11:30.210940  603347 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:11:30.212243  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) Calling .GetIP
	I0127 14:11:30.216079  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:11:30.216670  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:c8:d3", ip: ""} in network mk-kubernetes-upgrade-225004: {Iface:virbr1 ExpiryTime:2025-01-27 15:09:26 +0000 UTC Type:0 Mac:52:54:00:c2:c8:d3 Iaid: IPaddr:192.168.83.145 Prefix:24 Hostname:kubernetes-upgrade-225004 Clientid:01:52:54:00:c2:c8:d3}
	I0127 14:11:30.216698  603347 main.go:141] libmachine: (kubernetes-upgrade-225004) DBG | domain kubernetes-upgrade-225004 has defined IP address 192.168.83.145 and MAC address 52:54:00:c2:c8:d3 in network mk-kubernetes-upgrade-225004
	I0127 14:11:30.217072  603347 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0127 14:11:30.223514  603347 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-225004 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-225004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:11:30.223655  603347 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:11:30.223725  603347 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:11:30.283542  603347 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:11:30.283576  603347 crio.go:433] Images already preloaded, skipping extraction
	I0127 14:11:30.283654  603347 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:11:30.323774  603347 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:11:30.323801  603347 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:11:30.323809  603347 kubeadm.go:934] updating node { 192.168.83.145 8443 v1.32.1 crio true true} ...
	I0127 14:11:30.324450  603347 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-225004 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-225004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:11:30.324563  603347 ssh_runner.go:195] Run: crio config
	I0127 14:11:30.384594  603347 cni.go:84] Creating CNI manager for ""
	I0127 14:11:30.384623  603347 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:11:30.384637  603347 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:11:30.384668  603347 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.145 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-225004 NodeName:kubernetes-upgrade-225004 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:11:30.384855  603347 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-225004"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.145"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.145"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:11:30.384938  603347 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:11:30.396374  603347 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:11:30.396443  603347 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:11:30.411139  603347 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0127 14:11:30.435962  603347 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:11:30.458797  603347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0127 14:11:30.482460  603347 ssh_runner.go:195] Run: grep 192.168.83.145	control-plane.minikube.internal$ /etc/hosts
	I0127 14:11:30.487854  603347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:11:30.703251  603347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:11:30.731255  603347 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004 for IP: 192.168.83.145
	I0127 14:11:30.731291  603347 certs.go:194] generating shared ca certs ...
	I0127 14:11:30.731318  603347 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:11:30.731535  603347 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:11:30.731613  603347 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:11:30.731632  603347 certs.go:256] generating profile certs ...
	I0127 14:11:30.731757  603347 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/client.key
	I0127 14:11:30.731829  603347 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.key.2810f21f
	I0127 14:11:30.731908  603347 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/proxy-client.key
	I0127 14:11:30.732080  603347 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:11:30.732128  603347 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:11:30.732144  603347 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:11:30.732170  603347 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:11:30.732209  603347 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:11:30.732251  603347 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:11:30.732321  603347 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:11:30.733293  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:11:30.825356  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:11:30.937485  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:11:31.192131  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:11:31.350551  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 14:11:31.417600  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:11:31.457429  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:11:31.527013  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kubernetes-upgrade-225004/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:11:31.598968  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:11:31.660224  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:11:31.727481  603347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:11:31.792168  603347 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:11:31.811203  603347 ssh_runner.go:195] Run: openssl version
	I0127 14:11:31.817696  603347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:11:31.829024  603347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:11:31.833511  603347 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:11:31.833571  603347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:11:31.839662  603347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:11:31.849118  603347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:11:31.859980  603347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:11:31.864990  603347 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:11:31.865028  603347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:11:31.884760  603347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:11:31.896245  603347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:11:31.906659  603347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:11:31.911058  603347 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:11:31.911113  603347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:11:31.916649  603347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:11:31.926874  603347 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:11:31.931671  603347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:11:31.938289  603347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:11:31.944007  603347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:11:31.949925  603347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:11:31.955621  603347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:11:31.961277  603347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 14:11:31.967024  603347 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-225004 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-225004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:11:31.967129  603347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:11:31.967184  603347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:11:32.003712  603347 cri.go:89] found id: "34f96968435fbb5ff0ea0a3c233264d2190d0d27581e8058e05c47746c31623f"
	I0127 14:11:32.003735  603347 cri.go:89] found id: "e50b10b7a9b1f74ef3613521e9ee3f2c990073bbbc901db9c5ee055ea011637d"
	I0127 14:11:32.003741  603347 cri.go:89] found id: "ea628851f1c3f8e53e0bc6dbc10d0247cc567c8a4f8607c9e0a721362637a521"
	I0127 14:11:32.003747  603347 cri.go:89] found id: "f065eaa07f8d6047fb47c3e6abeb5abf5f48246f72d98f390dc2355fa5ba2c30"
	I0127 14:11:32.003751  603347 cri.go:89] found id: "bde71e44c65b070787b0e32ef54df50af91bc8418c1ee14042cf7c0ff90820d2"
	I0127 14:11:32.003756  603347 cri.go:89] found id: "d0ef3f9c1bb3130bc12c03bffd48d95f2db5e7f015762a174979a5dd28f4eadd"
	I0127 14:11:32.003760  603347 cri.go:89] found id: "b0a0c9cbc063064fc8aa47984025ec55adc237ffd00e06bd0a2e7bc8dd50c97d"
	I0127 14:11:32.003763  603347 cri.go:89] found id: "7b81e36e5252643006d2096236b88b51b468ffd2d66c84df841e23f434ed1fa4"
	I0127 14:11:32.003766  603347 cri.go:89] found id: "6dbb64eb47e0a63882f3e3d260550be3939ed4a6ccb001984cd2f6a41e0547b1"
	I0127 14:11:32.003777  603347 cri.go:89] found id: "4148216b3a97d4f836d3dcba976d3d23d7b3509bd47e3c7e9664ff799eb09977"
	I0127 14:11:32.003781  603347 cri.go:89] found id: "e20e32b0f1cb786feabc734f20c5d18a93f246becaf6285a3630093f160d7f5d"
	I0127 14:11:32.003788  603347 cri.go:89] found id: "13a30bcdb8930ee4913101ed6f91bac37d47f089efd6a46c0d90c71a68bb8219"
	I0127 14:11:32.003794  603347 cri.go:89] found id: ""
	I0127 14:11:32.003840  603347 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-225004 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-01-27 14:23:47.508981195 +0000 UTC m=+4836.417869803
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-225004 -n kubernetes-upgrade-225004
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-225004 -n kubernetes-upgrade-225004: exit status 2 (255.383584ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-225004 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-456130        | old-k8s-version-456130       | jenkins | v1.35.0 | 27 Jan 25 14:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-456130                              | old-k8s-version-456130       | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-456130             | old-k8s-version-456130       | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-456130                              | old-k8s-version-456130       | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | no-preload-183205 image list                           | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-183205                                   | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-183205                                   | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-183205                                   | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	| delete  | -p no-preload-183205                                   | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-650791 | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	|         | disable-driver-mounts-650791                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-379305 --memory=2200 --alsologtostderr   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-379305             | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-379305                                   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-379305                  | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-379305 --memory=2200 --alsologtostderr   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-379305 image list                           | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-379305                                   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-379305                                   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-379305                                   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	| delete  | -p newest-cni-379305                                   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	| start   | -p                                                     | default-k8s-diff-port-178758 | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:20 UTC |
	|         | default-k8s-diff-port-178758                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-178758  | default-k8s-diff-port-178758 | jenkins | v1.35.0 | 27 Jan 25 14:20 UTC | 27 Jan 25 14:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-178758 | jenkins | v1.35.0 | 27 Jan 25 14:20 UTC | 27 Jan 25 14:22 UTC |
	|         | default-k8s-diff-port-178758                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-178758       | default-k8s-diff-port-178758 | jenkins | v1.35.0 | 27 Jan 25 14:22 UTC | 27 Jan 25 14:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-178758 | jenkins | v1.35.0 | 27 Jan 25 14:22 UTC |                     |
	|         | default-k8s-diff-port-178758                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:22:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:22:20.395782  609255 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:22:20.396014  609255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:22:20.396022  609255 out.go:358] Setting ErrFile to fd 2...
	I0127 14:22:20.396027  609255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:22:20.396197  609255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:22:20.396675  609255 out.go:352] Setting JSON to false
	I0127 14:22:20.397658  609255 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":18285,"bootTime":1737969455,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:22:20.397718  609255 start.go:139] virtualization: kvm guest
	I0127 14:22:20.399577  609255 out.go:177] * [default-k8s-diff-port-178758] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:22:20.401012  609255 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:22:20.401011  609255 notify.go:220] Checking for updates...
	I0127 14:22:20.402187  609255 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:22:20.403315  609255 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:22:20.404366  609255 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:22:20.405406  609255 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:22:20.406439  609255 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:22:20.407767  609255 config.go:182] Loaded profile config "default-k8s-diff-port-178758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:22:20.408173  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:20.408231  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:20.424947  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45811
	I0127 14:22:20.425312  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:20.425883  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:20.425923  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:20.426306  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:20.426472  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:20.426711  609255 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:22:20.427039  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:20.427086  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:20.441736  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0127 14:22:20.442085  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:20.442508  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:20.442527  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:20.442846  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:20.443057  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:20.476946  609255 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:22:20.477895  609255 start.go:297] selected driver: kvm2
	I0127 14:22:20.477921  609255 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-178758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-178758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.187 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:22:20.478026  609255 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:22:20.478801  609255 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:22:20.478883  609255 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:22:20.493533  609255 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:22:20.493921  609255 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:22:20.493958  609255 cni.go:84] Creating CNI manager for ""
	I0127 14:22:20.494020  609255 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:22:20.494079  609255 start.go:340] cluster config:
	{Name:default-k8s-diff-port-178758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-178758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.187 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:22:20.494229  609255 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:22:20.495706  609255 out.go:177] * Starting "default-k8s-diff-port-178758" primary control-plane node in "default-k8s-diff-port-178758" cluster
	I0127 14:22:20.496738  609255 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:22:20.496772  609255 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:22:20.496789  609255 cache.go:56] Caching tarball of preloaded images
	I0127 14:22:20.496868  609255 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:22:20.496878  609255 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:22:20.496960  609255 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/config.json ...
	I0127 14:22:20.497149  609255 start.go:360] acquireMachinesLock for default-k8s-diff-port-178758: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:22:20.497209  609255 start.go:364] duration metric: took 36.589µs to acquireMachinesLock for "default-k8s-diff-port-178758"
	I0127 14:22:20.497229  609255 start.go:96] Skipping create...Using existing machine configuration
	I0127 14:22:20.497238  609255 fix.go:54] fixHost starting: 
	I0127 14:22:20.497657  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:20.497706  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:20.511052  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0127 14:22:20.511455  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:20.511891  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:20.511919  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:20.512207  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:20.512420  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:20.512575  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetState
	I0127 14:22:20.514015  609255 fix.go:112] recreateIfNeeded on default-k8s-diff-port-178758: state=Stopped err=<nil>
	I0127 14:22:20.514037  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	W0127 14:22:20.514174  609255 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 14:22:20.515570  609255 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-178758" ...
	I0127 14:22:20.516617  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Start
	I0127 14:22:20.516754  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) starting domain...
	I0127 14:22:20.516772  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) ensuring networks are active...
	I0127 14:22:20.517546  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Ensuring network default is active
	I0127 14:22:20.517897  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Ensuring network mk-default-k8s-diff-port-178758 is active
	I0127 14:22:20.518223  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) getting domain XML...
	I0127 14:22:20.518866  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) creating domain...
	I0127 14:22:20.847565  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) waiting for IP...
	I0127 14:22:20.848536  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:20.849025  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:20.849163  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:20.849016  609290 retry.go:31] will retry after 268.431324ms: waiting for domain to come up
	I0127 14:22:21.119652  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:21.120268  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:21.120300  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:21.120223  609290 retry.go:31] will retry after 331.943794ms: waiting for domain to come up
	I0127 14:22:21.453827  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:21.454398  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:21.454424  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:21.454364  609290 retry.go:31] will retry after 383.132135ms: waiting for domain to come up
	I0127 14:22:21.839132  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:21.839699  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:21.839735  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:21.839671  609290 retry.go:31] will retry after 469.058172ms: waiting for domain to come up
	I0127 14:22:22.310277  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:22.310901  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:22.310937  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:22.310840  609290 retry.go:31] will retry after 719.379897ms: waiting for domain to come up
	I0127 14:22:23.031434  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:23.031882  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:23.031915  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:23.031846  609290 retry.go:31] will retry after 580.255104ms: waiting for domain to come up
	I0127 14:22:23.613470  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:23.613998  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:23.614028  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:23.613979  609290 retry.go:31] will retry after 762.187823ms: waiting for domain to come up
	I0127 14:22:24.378034  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:24.378476  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:24.378510  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:24.378456  609290 retry.go:31] will retry after 1.128780111s: waiting for domain to come up
	I0127 14:22:25.508553  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:25.509056  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:25.509126  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:25.509013  609290 retry.go:31] will retry after 1.240398254s: waiting for domain to come up
	I0127 14:22:26.751794  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:26.752337  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:26.752377  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:26.752316  609290 retry.go:31] will retry after 1.896107843s: waiting for domain to come up
	I0127 14:22:28.650807  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:28.651333  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:28.651373  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:28.651320  609290 retry.go:31] will retry after 2.118724648s: waiting for domain to come up
	I0127 14:22:30.771829  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:30.772338  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:30.772373  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:30.772314  609290 retry.go:31] will retry after 2.888130674s: waiting for domain to come up
	I0127 14:22:33.664270  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:33.664713  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:22:33.664743  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:22:33.664678  609290 retry.go:31] will retry after 3.625556044s: waiting for domain to come up
	I0127 14:22:37.291232  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.291684  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has current primary IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.291725  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) found domain IP: 192.168.50.187
	I0127 14:22:37.291738  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) reserving static IP address...
	I0127 14:22:37.292183  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) reserved static IP address 192.168.50.187 for domain default-k8s-diff-port-178758
	I0127 14:22:37.292228  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-178758", mac: "52:54:00:9e:12:0f", ip: "192.168.50.187"} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:37.292240  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) waiting for SSH...
	I0127 14:22:37.292269  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | skip adding static IP to network mk-default-k8s-diff-port-178758 - found existing host DHCP lease matching {name: "default-k8s-diff-port-178758", mac: "52:54:00:9e:12:0f", ip: "192.168.50.187"}
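The "will retry after …: waiting for domain to come up" lines above reflect a growing, jittered backoff between IP lookups until the DHCP lease appears. A minimal Go sketch of that pattern, assuming a caller-supplied lookup function and made-up bounds (this is not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) after every failed attempt.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	backoff := 250 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2 // grow the wait, capped at a few seconds
		}
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	attempts := 0
	// Simulated lookup that succeeds on the fourth try.
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.50.187", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}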
	I0127 14:22:37.292286  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Getting to WaitForSSH function...
	I0127 14:22:37.294304  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.294620  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:37.294650  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.294758  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Using SSH client type: external
	I0127 14:22:37.294808  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa (-rw-------)
	I0127 14:22:37.294849  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:22:37.294865  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | About to run SSH command:
	I0127 14:22:37.294876  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | exit 0
	I0127 14:22:37.417292  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | SSH cmd err, output: <nil>: 
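The WaitForSSH step above probes the guest by running the external ssh client with non-interactive options and the trivial command "exit 0"; a zero exit status means sshd is reachable and the key is accepted. A rough Go sketch of such a probe, assuming the OpenSSH client is on PATH (the host, user, and key path below are placeholders, not values to reuse):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshAlive runs `ssh <opts> user@addr "exit 0"`. Options mirror a subset of the
// flags in the log above; a nil error means the SSH server answered.
func sshAlive(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
	}
	return nil
}

func main() {
	// Retry a few times, as the provisioner does while the guest boots.
	for i := 0; i < 5; i++ {
		if err := sshAlive("192.168.50.187", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is up")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}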
	I0127 14:22:37.417660  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetConfigRaw
	I0127 14:22:37.418383  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetIP
	I0127 14:22:37.421211  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.421634  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:37.421663  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.421966  609255 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/config.json ...
	I0127 14:22:37.422211  609255 machine.go:93] provisionDockerMachine start ...
	I0127 14:22:37.422239  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:37.422469  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:37.424905  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.425242  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:37.425277  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.425444  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:37.425676  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:37.425865  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:37.426024  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:37.426180  609255 main.go:141] libmachine: Using SSH client type: native
	I0127 14:22:37.426438  609255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0127 14:22:37.426449  609255 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:22:37.533377  609255 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 14:22:37.533401  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetMachineName
	I0127 14:22:37.533650  609255 buildroot.go:166] provisioning hostname "default-k8s-diff-port-178758"
	I0127 14:22:37.533683  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetMachineName
	I0127 14:22:37.533893  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:37.536250  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.536591  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:37.536621  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.536772  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:37.536950  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:37.537100  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:37.537253  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:37.537381  609255 main.go:141] libmachine: Using SSH client type: native
	I0127 14:22:37.537605  609255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0127 14:22:37.537624  609255 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-178758 && echo "default-k8s-diff-port-178758" | sudo tee /etc/hostname
	I0127 14:22:37.660637  609255 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-178758
	
	I0127 14:22:37.660683  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:37.663394  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.663700  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:37.663730  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.663875  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:37.664065  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:37.664200  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:37.664303  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:37.664457  609255 main.go:141] libmachine: Using SSH client type: native
	I0127 14:22:37.664672  609255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0127 14:22:37.664701  609255 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-178758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-178758/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-178758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:22:37.777994  609255 main.go:141] libmachine: SSH cmd err, output: <nil>: 
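The shell fragment above keeps the /etc/hosts entry idempotent: do nothing if the hostname is already listed, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. The same logic as a small local Go sketch, operating on a scratch file rather than the guest's /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname adds or rewrites the 127.0.1.1 entry for name in a hosts-format file.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	entry := "127.0.1.1 " + name
	replaced := false
	for i, l := range lines {
		if strings.HasSuffix(l, " "+name) {
			return nil // hostname already present, nothing to do
		}
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = entry // rewrite the existing loopback alias
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, entry)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	tmp := "hosts.test"
	os.WriteFile(tmp, []byte("127.0.0.1 localhost\n"), 0644)
	if err := ensureHostname(tmp, "default-k8s-diff-port-178758"); err != nil {
		fmt.Println("error:", err)
		return
	}
	out, _ := os.ReadFile(tmp)
	fmt.Print(string(out))
}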
	I0127 14:22:37.778021  609255 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:22:37.778073  609255 buildroot.go:174] setting up certificates
	I0127 14:22:37.778083  609255 provision.go:84] configureAuth start
	I0127 14:22:37.778095  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetMachineName
	I0127 14:22:37.778381  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetIP
	I0127 14:22:37.780711  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.781029  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:37.781059  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.781198  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:37.783539  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.783892  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:37.783927  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.784004  609255 provision.go:143] copyHostCerts
	I0127 14:22:37.784066  609255 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:22:37.784089  609255 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:22:37.784178  609255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:22:37.784308  609255 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:22:37.784321  609255 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:22:37.784368  609255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:22:37.784451  609255 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:22:37.784462  609255 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:22:37.784495  609255 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:22:37.784556  609255 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-178758 san=[127.0.0.1 192.168.50.187 default-k8s-diff-port-178758 localhost minikube]
	I0127 14:22:37.863057  609255 provision.go:177] copyRemoteCerts
	I0127 14:22:37.863104  609255 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:22:37.863130  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:37.865417  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.865739  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:37.865765  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:37.865922  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:37.866089  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:37.866236  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:37.866383  609255 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:22:37.951031  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 14:22:37.975218  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 14:22:37.999061  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:22:38.022426  609255 provision.go:87] duration metric: took 244.327167ms to configureAuth
	I0127 14:22:38.022447  609255 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:22:38.022662  609255 config.go:182] Loaded profile config "default-k8s-diff-port-178758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:22:38.022754  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:38.025272  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.025529  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:38.025555  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.025742  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:38.025899  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:38.026069  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:38.026183  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:38.026331  609255 main.go:141] libmachine: Using SSH client type: native
	I0127 14:22:38.026496  609255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0127 14:22:38.026515  609255 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:22:38.281340  609255 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:22:38.281377  609255 machine.go:96] duration metric: took 859.147876ms to provisionDockerMachine
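The CRIO_MINIKUBE_OPTIONS step above uses the "printf … | sudo tee <file>" idiom so the file body travels on stdin and never needs shell quoting, then restarts crio to pick it up. A hedged Go sketch of just the write (the path below is a scratch location, and the crio restart is only mentioned, not run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// writeViaTee feeds content on stdin to `sudo tee path`, mirroring the
// provisioning command in the log above.
func writeViaTee(path, content string) error {
	cmd := exec.Command("sudo", "tee", path)
	cmd.Stdin = strings.NewReader(content)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tee failed: %v (%s)", err, out)
	}
	return nil
}

func main() {
	body := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := writeViaTee("/tmp/crio.minikube.example", body); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote sysconfig fragment; a real provisioner would follow with: sudo systemctl restart crio")
}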
	I0127 14:22:38.281393  609255 start.go:293] postStartSetup for "default-k8s-diff-port-178758" (driver="kvm2")
	I0127 14:22:38.281407  609255 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:22:38.281435  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:38.281772  609255 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:22:38.281804  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:38.284336  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.284629  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:38.284683  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.284781  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:38.284949  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:38.285073  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:38.285192  609255 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:22:38.368267  609255 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:22:38.373815  609255 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:22:38.373841  609255 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:22:38.373896  609255 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:22:38.373974  609255 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:22:38.374068  609255 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:22:38.384938  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:22:38.409544  609255 start.go:296] duration metric: took 128.136261ms for postStartSetup
	I0127 14:22:38.409606  609255 fix.go:56] duration metric: took 17.912366259s for fixHost
	I0127 14:22:38.409644  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:38.412585  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.412925  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:38.412948  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.413118  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:38.413320  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:38.413503  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:38.413648  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:38.413816  609255 main.go:141] libmachine: Using SSH client type: native
	I0127 14:22:38.413988  609255 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0127 14:22:38.413997  609255 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:22:38.522340  609255 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987758.494202031
	
	I0127 14:22:38.522372  609255 fix.go:216] guest clock: 1737987758.494202031
	I0127 14:22:38.522381  609255 fix.go:229] Guest: 2025-01-27 14:22:38.494202031 +0000 UTC Remote: 2025-01-27 14:22:38.409615429 +0000 UTC m=+18.052098903 (delta=84.586602ms)
	I0127 14:22:38.522428  609255 fix.go:200] guest clock delta is within tolerance: 84.586602ms
	I0127 14:22:38.522438  609255 start.go:83] releasing machines lock for "default-k8s-diff-port-178758", held for 18.02521705s
	I0127 14:22:38.522469  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:38.522747  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetIP
	I0127 14:22:38.525498  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.525975  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:38.526009  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.526156  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:38.526636  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:38.526805  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:38.526912  609255 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:22:38.526952  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:38.527057  609255 ssh_runner.go:195] Run: cat /version.json
	I0127 14:22:38.527106  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:38.529918  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.530164  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.530245  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:38.530275  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.530447  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:38.530617  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:38.530644  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:38.530646  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:38.530821  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:38.530820  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:38.531001  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:38.531111  609255 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:22:38.531257  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:38.531411  609255 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:22:38.629655  609255 ssh_runner.go:195] Run: systemctl --version
	I0127 14:22:38.635329  609255 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:22:38.781199  609255 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:22:38.787202  609255 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:22:38.787271  609255 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:22:38.802809  609255 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:22:38.802837  609255 start.go:495] detecting cgroup driver to use...
	I0127 14:22:38.802906  609255 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:22:38.818240  609255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:22:38.831376  609255 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:22:38.831434  609255 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:22:38.845357  609255 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:22:38.859156  609255 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:22:38.975031  609255 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:22:39.132313  609255 docker.go:233] disabling docker service ...
	I0127 14:22:39.132380  609255 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:22:39.146279  609255 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:22:39.159041  609255 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:22:39.281101  609255 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:22:39.408510  609255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:22:39.423661  609255 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:22:39.442868  609255 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:22:39.442934  609255 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:22:39.453328  609255 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:22:39.453408  609255 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:22:39.463371  609255 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:22:39.473291  609255 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:22:39.483260  609255 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:22:39.494157  609255 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:22:39.504526  609255 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:22:39.521732  609255 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:22:39.531796  609255 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:22:39.540874  609255 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:22:39.540930  609255 ssh_runner.go:195] Run: sudo modprobe br_netfilter
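The status-255 sysctl error a few lines above is expected when br_netfilter has not been loaded yet; the failed probe is treated as a cue to modprobe the module rather than as a fatal error. A simplified sketch of that probe-then-load pattern (requires root, and is not the exact minikube code path):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter checks for the bridge-nf sysctl key and, if it is
// missing, loads br_netfilter and re-checks.
func ensureBridgeNetfilter() error {
	check := func() error {
		return exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run()
	}
	if check() == nil {
		return nil // key exists, module already loaded
	}
	if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v (%s)", err, out)
	}
	return check()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("bridge netfilter not available:", err)
		return
	}
	fmt.Println("net.bridge.bridge-nf-call-iptables is present")
}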
	I0127 14:22:39.554057  609255 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:22:39.563082  609255 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:22:39.685768  609255 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:22:39.789881  609255 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:22:39.789956  609255 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:22:39.795338  609255 start.go:563] Will wait 60s for crictl version
	I0127 14:22:39.795405  609255 ssh_runner.go:195] Run: which crictl
	I0127 14:22:39.799502  609255 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:22:39.840806  609255 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:22:39.840898  609255 ssh_runner.go:195] Run: crio --version
	I0127 14:22:39.867960  609255 ssh_runner.go:195] Run: crio --version
	I0127 14:22:39.901865  609255 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:22:39.902835  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetIP
	I0127 14:22:39.905886  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:39.906339  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:39.906384  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:39.906602  609255 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 14:22:39.910462  609255 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:22:39.922755  609255 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-178758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-178758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.187 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:22:39.922891  609255 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:22:39.922938  609255 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:22:39.963508  609255 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 14:22:39.963555  609255 ssh_runner.go:195] Run: which lz4
	I0127 14:22:39.967241  609255 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:22:39.971277  609255 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:22:39.971301  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 14:22:41.334510  609255 crio.go:462] duration metric: took 1.367283091s to copy over tarball
	I0127 14:22:41.334603  609255 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:22:43.447279  609255 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.112639903s)
	I0127 14:22:43.447317  609255 crio.go:469] duration metric: took 2.112759996s to extract the tarball
	I0127 14:22:43.447328  609255 ssh_runner.go:146] rm: /preloaded.tar.lz4
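The preload step above copies an lz4-compressed image tarball into the guest, unpacks it under /var while preserving file capabilities, then removes the archive, and the log reports a duration metric for each phase. A standalone sketch of the extract-and-time part, assuming GNU tar and lz4 are installed (paths are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks an lz4-compressed tar under dir, keeping security
// capability xattrs, and reports how long the extraction took.
func extractPreload(tarball, dir string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dir, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("tar failed: %v (%s)", err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := extractPreload("preloaded-images.tar.lz4", "/tmp/preload-test")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("duration metric: took %v to extract the tarball\n", d)
}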
	I0127 14:22:43.488683  609255 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:22:43.539224  609255 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:22:43.539255  609255 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:22:43.539268  609255 kubeadm.go:934] updating node { 192.168.50.187 8444 v1.32.1 crio true true} ...
	I0127 14:22:43.539419  609255 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-178758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-178758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:22:43.539511  609255 ssh_runner.go:195] Run: crio config
	I0127 14:22:43.586598  609255 cni.go:84] Creating CNI manager for ""
	I0127 14:22:43.586619  609255 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:22:43.586629  609255 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:22:43.586651  609255 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.187 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-178758 NodeName:default-k8s-diff-port-178758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:22:43.586770  609255 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.187
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-178758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.187"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.187"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
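The kubeadm.yaml rendered above bundles four documents in one file, InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by "---". A stdlib-only Go sketch that splits such a manifest and lists each document's kind, as a quick sanity check before handing the file to kubeadm (the inline sample is abbreviated, not the full config above):

package main

import (
	"fmt"
	"strings"
)

// kinds returns the `kind:` value of each YAML document in a multi-document manifest.
func kinds(manifest string) []string {
	var out []string
	for _, doc := range strings.Split(manifest, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	manifest := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
	fmt.Println(kinds(manifest)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}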
	
	I0127 14:22:43.586830  609255 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:22:43.597086  609255 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:22:43.597148  609255 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:22:43.606896  609255 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0127 14:22:43.623158  609255 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:22:43.639839  609255 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0127 14:22:43.656682  609255 ssh_runner.go:195] Run: grep 192.168.50.187	control-plane.minikube.internal$ /etc/hosts
	I0127 14:22:43.660445  609255 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:22:43.672553  609255 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:22:43.800277  609255 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:22:43.817890  609255 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758 for IP: 192.168.50.187
	I0127 14:22:43.817912  609255 certs.go:194] generating shared ca certs ...
	I0127 14:22:43.817934  609255 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:22:43.818137  609255 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:22:43.818188  609255 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:22:43.818200  609255 certs.go:256] generating profile certs ...
	I0127 14:22:43.818301  609255 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.key
	I0127 14:22:43.818354  609255 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.key.3789323f
	I0127 14:22:43.818388  609255 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/proxy-client.key
	I0127 14:22:43.818491  609255 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:22:43.818522  609255 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:22:43.818532  609255 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:22:43.818557  609255 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:22:43.818581  609255 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:22:43.818604  609255 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:22:43.818658  609255 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:22:43.819485  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:22:43.875286  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:22:43.915190  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:22:43.948246  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:22:43.976372  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0127 14:22:44.003156  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 14:22:44.026383  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:22:44.049740  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 14:22:44.076552  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:22:44.102554  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:22:44.139002  609255 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:22:44.166977  609255 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:22:44.184889  609255 ssh_runner.go:195] Run: openssl version
	I0127 14:22:44.190749  609255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:22:44.202874  609255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:22:44.207577  609255 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:22:44.207626  609255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:22:44.213792  609255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:22:44.225226  609255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:22:44.236428  609255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:22:44.241591  609255 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:22:44.241656  609255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:22:44.247345  609255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:22:44.258961  609255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:22:44.270361  609255 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:22:44.274998  609255 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:22:44.275045  609255 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:22:44.280467  609255 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
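	The ln -fs commands above follow OpenSSL's hashed-directory convention: each CA copied into /usr/share/ca-certificates is linked into /etc/ssl/certs under the name <subject-hash>.0, so TLS libraries can look it up by hash. A minimal Go sketch of that pattern (the helper name and use of the openssl binary are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCAByHash computes the OpenSSL subject hash of a PEM certificate and
	// symlinks it into certsDir as "<hash>.0", mirroring the commands in the log.
	func linkCAByHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // "-fs" semantics: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}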
	I0127 14:22:44.291403  609255 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:22:44.297535  609255 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:22:44.304063  609255 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:22:44.310198  609255 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:22:44.316466  609255 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:22:44.322484  609255 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:22:44.328725  609255 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
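	The openssl x509 -checkend 86400 calls above exit non-zero if a control-plane certificate expires within the next 24 hours, which is how the restart path decides whether certs need regenerating. The same check done natively in Go (a hypothetical helper, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon) // analogous to: openssl x509 -checkend 86400
	}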
	I0127 14:22:44.334585  609255 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-178758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-178758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.187 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:22:44.334682  609255 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:22:44.334731  609255 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:22:44.377284  609255 cri.go:89] found id: ""
	I0127 14:22:44.377344  609255 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:22:44.387138  609255 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 14:22:44.387155  609255 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 14:22:44.387188  609255 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 14:22:44.398227  609255 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:22:44.399534  609255 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-178758" does not appear in /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:22:44.400504  609255 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-555419/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-178758" cluster setting kubeconfig missing "default-k8s-diff-port-178758" context setting]
	I0127 14:22:44.401782  609255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:22:44.403543  609255 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 14:22:44.414424  609255 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.187
	I0127 14:22:44.414455  609255 kubeadm.go:1160] stopping kube-system containers ...
	I0127 14:22:44.414472  609255 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 14:22:44.414517  609255 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:22:44.454370  609255 cri.go:89] found id: ""
	I0127 14:22:44.454412  609255 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 14:22:44.472968  609255 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:22:44.484075  609255 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:22:44.484090  609255 kubeadm.go:157] found existing configuration files:
	
	I0127 14:22:44.484133  609255 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 14:22:44.494656  609255 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:22:44.494708  609255 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:22:44.505439  609255 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 14:22:44.514306  609255 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:22:44.514366  609255 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:22:44.524041  609255 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 14:22:44.533004  609255 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:22:44.533073  609255 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:22:44.543134  609255 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 14:22:44.551563  609255 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:22:44.551599  609255 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
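	The grep/rm pairs above (kubeadm.go:163) implement a simple rule: each existing kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is deleted so kubeadm can regenerate it in the kubeconfig phase. A compact sketch of that loop (endpoint and file list taken from the log; the helper itself is illustrative):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// cleanStaleKubeconfigs removes any of the given kubeconfig files that do not
	// reference the expected API server endpoint, mirroring the grep/rm pairs above.
	func cleanStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // missing file: nothing to clean, kubeadm will create it
			}
			if !bytes.Contains(data, []byte(endpoint)) {
				fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
				_ = os.Remove(f)
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}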
	I0127 14:22:44.560875  609255 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:22:44.570338  609255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:22:44.694837  609255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:22:45.454891  609255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:22:46.058147  609255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:22:46.160749  609255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
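	Note that the restart path does not run a full kubeadm init; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml. A sketch of driving the same phases from Go (binary path and config path as in the log; sudo/env handling simplified, so this is an illustration rather than minikube's runner):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const kubeadm = "/var/lib/minikube/binaries/v1.32.1/kubeadm"
		const cfg = "/var/tmp/minikube/kubeadm.yaml"

		// Phases replayed during a control-plane restart, in the order the log runs them.
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{kubeadm, "init", "phase"}, p...)
			args = append(args, "--config", cfg)
			cmd := exec.Command("sudo", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}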
	I0127 14:22:46.258731  609255 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:22:46.258846  609255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:22:46.759840  609255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:22:47.259590  609255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:22:47.759705  609255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:22:47.774763  609255 api_server.go:72] duration metric: took 1.516032824s to wait for apiserver process to appear ...
	I0127 14:22:47.774792  609255 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:22:47.774814  609255 api_server.go:253] Checking apiserver healthz at https://192.168.50.187:8444/healthz ...
	I0127 14:22:50.554980  609255 api_server.go:279] https://192.168.50.187:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:22:50.555016  609255 api_server.go:103] status: https://192.168.50.187:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:22:50.555032  609255 api_server.go:253] Checking apiserver healthz at https://192.168.50.187:8444/healthz ...
	I0127 14:22:50.578550  609255 api_server.go:279] https://192.168.50.187:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:22:50.578584  609255 api_server.go:103] status: https://192.168.50.187:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:22:50.774972  609255 api_server.go:253] Checking apiserver healthz at https://192.168.50.187:8444/healthz ...
	I0127 14:22:50.782339  609255 api_server.go:279] https://192.168.50.187:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:22:50.782364  609255 api_server.go:103] status: https://192.168.50.187:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:22:51.274988  609255 api_server.go:253] Checking apiserver healthz at https://192.168.50.187:8444/healthz ...
	I0127 14:22:51.290012  609255 api_server.go:279] https://192.168.50.187:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:22:51.290056  609255 api_server.go:103] status: https://192.168.50.187:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:22:51.775761  609255 api_server.go:253] Checking apiserver healthz at https://192.168.50.187:8444/healthz ...
	I0127 14:22:51.788984  609255 api_server.go:279] https://192.168.50.187:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:22:51.789021  609255 api_server.go:103] status: https://192.168.50.187:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:22:52.275700  609255 api_server.go:253] Checking apiserver healthz at https://192.168.50.187:8444/healthz ...
	I0127 14:22:52.281900  609255 api_server.go:279] https://192.168.50.187:8444/healthz returned 200:
	ok
	I0127 14:22:52.290031  609255 api_server.go:141] control plane version: v1.32.1
	I0127 14:22:52.290056  609255 api_server.go:131] duration metric: took 4.515257279s to wait for apiserver health ...
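	The 403 and 500 responses above are expected while the restarted apiserver finishes its post-start hooks (rbac/bootstrap-roles and the priority-class bootstrap are the last to clear); the poller simply keeps hitting /healthz until it returns 200 ok. A minimal polling sketch under those assumptions (anonymous HTTPS with certificate verification disabled, purely for illustration):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the timeout elapses. 403/500 responses are treated as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.187:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}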
	I0127 14:22:52.290066  609255 cni.go:84] Creating CNI manager for ""
	I0127 14:22:52.290076  609255 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:22:52.291558  609255 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:22:52.293256  609255 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:22:52.306602  609255 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
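	With the kvm2 driver and crio runtime, minikube falls back to the bridge CNI and drops a conflist into /etc/cni/net.d matching the pod subnet from the kubeadm config (10.244.0.0/16). The sketch below writes an illustrative bridge conflist; the JSON shown is an assumption for clarity, not necessarily the exact 496-byte file the log transfers:

	package main

	import (
		"fmt"
		"os"
	)

	// An illustrative bridge CNI conflist for the 10.244.0.0/16 pod subnet;
	// not necessarily the exact file minikube writes.
	const bridgeConflist = `{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "ranges": [[{ "subnet": "10.244.0.0/16" }]]
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}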
	I0127 14:22:52.338902  609255 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:22:52.352057  609255 system_pods.go:59] 8 kube-system pods found
	I0127 14:22:52.352145  609255 system_pods.go:61] "coredns-668d6bf9bc-nxbp7" [1598d49e-31bd-4040-9517-342c41bdbfbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:22:52.352177  609255 system_pods.go:61] "etcd-default-k8s-diff-port-178758" [677f51de-78d4-4fcf-a379-aa2cdeae5c94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 14:22:52.352207  609255 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-178758" [485b927e-cb6d-44f2-a8a3-99e9b04eb683] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 14:22:52.352224  609255 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-178758" [a1c04b1b-a73f-4278-ba51-cf849f495fad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 14:22:52.352236  609255 system_pods.go:61] "kube-proxy-h9dzd" [6014094a-3b42-457c-a06b-9432d1029225] Running
	I0127 14:22:52.352248  609255 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-178758" [7b3dbf93-f770-444b-a0b7-2fd807faef6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 14:22:52.352257  609255 system_pods.go:61] "metrics-server-f79f97bbb-vwkjg" [9480940b-5634-4924-a5bf-32bd4606c642] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:22:52.352266  609255 system_pods.go:61] "storage-provisioner" [e4090d6b-233e-4053-a355-3ad858d5b9b4] Running
	I0127 14:22:52.352278  609255 system_pods.go:74] duration metric: took 13.357885ms to wait for pod list to return data ...
	I0127 14:22:52.352290  609255 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:22:52.356120  609255 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:22:52.356150  609255 node_conditions.go:123] node cpu capacity is 2
	I0127 14:22:52.356166  609255 node_conditions.go:105] duration metric: took 3.867428ms to run NodePressure ...
	I0127 14:22:52.356190  609255 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:22:52.640658  609255 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 14:22:52.647595  609255 kubeadm.go:739] kubelet initialised
	I0127 14:22:52.647627  609255 kubeadm.go:740] duration metric: took 6.942099ms waiting for restarted kubelet to initialise ...
	I0127 14:22:52.647644  609255 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:22:52.652915  609255 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace to be "Ready" ...
	I0127 14:22:52.661462  609255 pod_ready.go:98] node "default-k8s-diff-port-178758" hosting pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:52.661481  609255 pod_ready.go:82] duration metric: took 8.536401ms for pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace to be "Ready" ...
	E0127 14:22:52.661490  609255 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-178758" hosting pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:52.661496  609255 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:22:52.667370  609255 pod_ready.go:98] node "default-k8s-diff-port-178758" hosting pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:52.667400  609255 pod_ready.go:82] duration metric: took 5.892824ms for pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	E0127 14:22:52.667415  609255 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-178758" hosting pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:52.667425  609255 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:22:52.672131  609255 pod_ready.go:98] node "default-k8s-diff-port-178758" hosting pod "kube-apiserver-default-k8s-diff-port-178758" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:52.672160  609255 pod_ready.go:82] duration metric: took 4.721751ms for pod "kube-apiserver-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	E0127 14:22:52.672172  609255 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-178758" hosting pod "kube-apiserver-default-k8s-diff-port-178758" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:52.672181  609255 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:22:52.742900  609255 pod_ready.go:98] node "default-k8s-diff-port-178758" hosting pod "kube-controller-manager-default-k8s-diff-port-178758" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:52.742938  609255 pod_ready.go:82] duration metric: took 70.747807ms for pod "kube-controller-manager-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	E0127 14:22:52.742955  609255 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-178758" hosting pod "kube-controller-manager-default-k8s-diff-port-178758" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:52.742966  609255 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h9dzd" in "kube-system" namespace to be "Ready" ...
	I0127 14:22:53.142017  609255 pod_ready.go:98] node "default-k8s-diff-port-178758" hosting pod "kube-proxy-h9dzd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:53.142049  609255 pod_ready.go:82] duration metric: took 399.067942ms for pod "kube-proxy-h9dzd" in "kube-system" namespace to be "Ready" ...
	E0127 14:22:53.142062  609255 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-178758" hosting pod "kube-proxy-h9dzd" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:53.142071  609255 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:22:53.541946  609255 pod_ready.go:98] node "default-k8s-diff-port-178758" hosting pod "kube-scheduler-default-k8s-diff-port-178758" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:53.541975  609255 pod_ready.go:82] duration metric: took 399.893045ms for pod "kube-scheduler-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	E0127 14:22:53.541994  609255 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-178758" hosting pod "kube-scheduler-default-k8s-diff-port-178758" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:53.542009  609255 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace to be "Ready" ...
	I0127 14:22:53.942484  609255 pod_ready.go:98] node "default-k8s-diff-port-178758" hosting pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:53.942512  609255 pod_ready.go:82] duration metric: took 400.493622ms for pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace to be "Ready" ...
	E0127 14:22:53.942525  609255 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-178758" hosting pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:53.942534  609255 pod_ready.go:39] duration metric: took 1.294876975s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:22:53.942557  609255 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:22:53.955334  609255 ops.go:34] apiserver oom_adj: -16
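	The oom_adj check confirms the restarted apiserver is shielded from the OOM killer (-16 here; control-plane static pods normally end up strongly negative). Reading the same value in Go, following the cat /proc/$(pgrep kube-apiserver)/oom_adj command above (the pgrep pattern is copied from the log, the rest is illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Resolve the newest kube-apiserver PID, as "pgrep -xnf" does in the log.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}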
	I0127 14:22:53.955356  609255 kubeadm.go:597] duration metric: took 9.56819371s to restartPrimaryControlPlane
	I0127 14:22:53.955367  609255 kubeadm.go:394] duration metric: took 9.620787072s to StartCluster
	I0127 14:22:53.955390  609255 settings.go:142] acquiring lock: {Name:mk3584d1c70a231ddef63c926d3bba51690f47f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:22:53.955457  609255 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:22:53.957170  609255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:22:53.957385  609255 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.187 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:22:53.957456  609255 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:22:53.957545  609255 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-178758"
	I0127 14:22:53.957569  609255 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-178758"
	W0127 14:22:53.957596  609255 addons.go:247] addon storage-provisioner should already be in state true
	I0127 14:22:53.957606  609255 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-178758"
	I0127 14:22:53.957612  609255 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-178758"
	I0127 14:22:53.957638  609255 host.go:66] Checking if "default-k8s-diff-port-178758" exists ...
	I0127 14:22:53.957643  609255 config.go:182] Loaded profile config "default-k8s-diff-port-178758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:22:53.957652  609255 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-178758"
	W0127 14:22:53.957663  609255 addons.go:247] addon metrics-server should already be in state true
	I0127 14:22:53.957643  609255 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-178758"
	I0127 14:22:53.957697  609255 host.go:66] Checking if "default-k8s-diff-port-178758" exists ...
	I0127 14:22:53.957621  609255 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-178758"
	I0127 14:22:53.957732  609255 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-178758"
	W0127 14:22:53.957747  609255 addons.go:247] addon dashboard should already be in state true
	I0127 14:22:53.957785  609255 host.go:66] Checking if "default-k8s-diff-port-178758" exists ...
	I0127 14:22:53.958028  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:53.958070  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:53.958096  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:53.958115  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:53.958136  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:53.958177  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:53.958219  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:53.958256  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:53.958782  609255 out.go:177] * Verifying Kubernetes components...
	I0127 14:22:53.959965  609255 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:22:53.974876  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34895
	I0127 14:22:53.974880  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0127 14:22:53.975445  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:53.975493  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:53.976057  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:53.976076  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:53.976096  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:53.976113  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I0127 14:22:53.976078  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:53.976471  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:53.976518  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:53.976556  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:53.977050  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:53.977051  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:53.977095  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:53.977139  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:53.977442  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45673
	I0127 14:22:53.977784  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:53.978066  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:53.978090  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:53.978247  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:53.978272  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:53.978707  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:53.978712  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:53.978950  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetState
	I0127 14:22:53.979136  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:53.979180  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:53.983046  609255 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-178758"
	W0127 14:22:53.983070  609255 addons.go:247] addon default-storageclass should already be in state true
	I0127 14:22:53.983097  609255 host.go:66] Checking if "default-k8s-diff-port-178758" exists ...
	I0127 14:22:53.983462  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:53.983499  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:53.992525  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I0127 14:22:53.995200  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38179
	I0127 14:22:53.997295  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38561
	I0127 14:22:54.006011  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:54.006104  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:54.006131  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:54.006499  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:54.006518  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:54.006639  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:54.006642  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:54.006652  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:54.006660  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:54.006818  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:54.006950  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:54.006990  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetState
	I0127 14:22:54.007009  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:54.007198  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetState
	I0127 14:22:54.007256  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetState
	I0127 14:22:54.009481  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:54.009913  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:54.010246  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:54.011632  609255 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 14:22:54.011633  609255 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 14:22:54.011637  609255 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:22:54.013113  609255 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:22:54.013143  609255 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:22:54.013164  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:54.013195  609255 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:22:54.013213  609255 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:22:54.013235  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:54.014564  609255 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 14:22:54.015747  609255 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 14:22:54.015765  609255 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 14:22:54.015784  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:54.016942  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:54.017357  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:54.017663  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:54.017688  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:54.017827  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:54.017986  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:54.018015  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:54.018109  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:54.018280  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:54.018280  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:54.018430  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:54.018749  609255 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:22:54.018789  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:54.018940  609255 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:22:54.020512  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:54.020933  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:54.020964  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:54.021227  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:54.021403  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:54.021615  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:54.021770  609255 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:22:54.022709  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42453
	I0127 14:22:54.023149  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:54.023660  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:54.023678  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:54.023945  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:54.024663  609255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:22:54.024716  609255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:54.042893  609255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37627
	I0127 14:22:54.043266  609255 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:54.043755  609255 main.go:141] libmachine: Using API Version  1
	I0127 14:22:54.043778  609255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:54.044121  609255 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:54.044332  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetState
	I0127 14:22:54.045817  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:22:54.046028  609255 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:22:54.046045  609255 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:22:54.046064  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:22:54.048797  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:54.049204  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:22:31 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:22:54.049246  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:22:54.049524  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:22:54.049702  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:22:54.049877  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:22:54.050026  609255 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:22:54.178698  609255 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:22:54.201447  609255 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-178758" to be "Ready" ...
	I0127 14:22:54.290132  609255 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 14:22:54.290163  609255 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 14:22:54.291526  609255 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:22:54.291550  609255 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 14:22:54.297150  609255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:22:54.315823  609255 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 14:22:54.315844  609255 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 14:22:54.330036  609255 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:22:54.330057  609255 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:22:54.353726  609255 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:22:54.353751  609255 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:22:54.366665  609255 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:22:54.366687  609255 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:22:54.390706  609255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:22:54.407196  609255 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:22:54.407214  609255 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:22:54.432996  609255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:22:54.438534  609255 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:22:54.438555  609255 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:22:54.499390  609255 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:22:54.499426  609255 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:22:54.538959  609255 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:22:54.538997  609255 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:22:54.570337  609255 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:22:54.570365  609255 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:22:54.686318  609255 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:22:54.686344  609255 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:22:54.751081  609255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:22:54.755335  609255 main.go:141] libmachine: Making call to close driver server
	I0127 14:22:54.755369  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:22:54.755687  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Closing plugin on server side
	I0127 14:22:54.755730  609255 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:22:54.755741  609255 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:22:54.755759  609255 main.go:141] libmachine: Making call to close driver server
	I0127 14:22:54.755768  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:22:54.756049  609255 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:22:54.756075  609255 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:22:54.764979  609255 main.go:141] libmachine: Making call to close driver server
	I0127 14:22:54.765004  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:22:54.765251  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Closing plugin on server side
	I0127 14:22:54.765258  609255 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:22:54.765274  609255 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:22:55.684743  609255 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.293999152s)
	I0127 14:22:55.684803  609255 main.go:141] libmachine: Making call to close driver server
	I0127 14:22:55.684823  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:22:55.685151  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Closing plugin on server side
	I0127 14:22:55.685211  609255 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:22:55.685231  609255 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:22:55.685247  609255 main.go:141] libmachine: Making call to close driver server
	I0127 14:22:55.685262  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:22:55.685495  609255 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:22:55.685543  609255 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:22:55.685574  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Closing plugin on server side
	I0127 14:22:55.737032  609255 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.303993299s)
	I0127 14:22:55.737091  609255 main.go:141] libmachine: Making call to close driver server
	I0127 14:22:55.737104  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:22:55.737432  609255 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:22:55.737450  609255 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:22:55.737474  609255 main.go:141] libmachine: Making call to close driver server
	I0127 14:22:55.737482  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:22:55.737434  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Closing plugin on server side
	I0127 14:22:55.737742  609255 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:22:55.737763  609255 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:22:55.737778  609255 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-178758"
	I0127 14:22:55.737785  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Closing plugin on server side
	I0127 14:22:56.117957  609255 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.36681563s)
	I0127 14:22:56.118023  609255 main.go:141] libmachine: Making call to close driver server
	I0127 14:22:56.118049  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:22:56.118396  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Closing plugin on server side
	I0127 14:22:56.118467  609255 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:22:56.118486  609255 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:22:56.118507  609255 main.go:141] libmachine: Making call to close driver server
	I0127 14:22:56.118520  609255 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:22:56.118772  609255 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:22:56.118793  609255 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:22:56.120045  609255 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-178758 addons enable metrics-server
	
	I0127 14:22:56.121306  609255 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 14:22:56.122347  609255 addons.go:514] duration metric: took 2.164898686s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 14:22:56.204582  609255 node_ready.go:53] node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:22:58.705349  609255 node_ready.go:53] node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:23:00.706535  609255 node_ready.go:53] node "default-k8s-diff-port-178758" has status "Ready":"False"
	I0127 14:23:01.204994  609255 node_ready.go:49] node "default-k8s-diff-port-178758" has status "Ready":"True"
	I0127 14:23:01.205019  609255 node_ready.go:38] duration metric: took 7.003540419s for node "default-k8s-diff-port-178758" to be "Ready" ...
	I0127 14:23:01.205035  609255 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:23:01.210190  609255 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:01.214472  609255 pod_ready.go:93] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"True"
	I0127 14:23:01.214500  609255 pod_ready.go:82] duration metric: took 4.286136ms for pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:01.214512  609255 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:03.221122  609255 pod_ready.go:103] pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:05.720287  609255 pod_ready.go:103] pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:07.221222  609255 pod_ready.go:93] pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace has status "Ready":"True"
	I0127 14:23:07.221255  609255 pod_ready.go:82] duration metric: took 6.006733601s for pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:07.221268  609255 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:07.228103  609255 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-178758" in "kube-system" namespace has status "Ready":"True"
	I0127 14:23:07.228133  609255 pod_ready.go:82] duration metric: took 6.855141ms for pod "kube-apiserver-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:07.228148  609255 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:07.232775  609255 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-178758" in "kube-system" namespace has status "Ready":"True"
	I0127 14:23:07.232798  609255 pod_ready.go:82] duration metric: took 4.641692ms for pod "kube-controller-manager-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:07.232807  609255 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h9dzd" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:07.237141  609255 pod_ready.go:93] pod "kube-proxy-h9dzd" in "kube-system" namespace has status "Ready":"True"
	I0127 14:23:07.237162  609255 pod_ready.go:82] duration metric: took 4.347647ms for pod "kube-proxy-h9dzd" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:07.237173  609255 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:07.246649  609255 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-178758" in "kube-system" namespace has status "Ready":"True"
	I0127 14:23:07.246668  609255 pod_ready.go:82] duration metric: took 9.487744ms for pod "kube-scheduler-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:07.246680  609255 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace to be "Ready" ...
	I0127 14:23:09.252788  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:11.752586  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:13.752683  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:15.752979  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:17.753968  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:20.252445  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:22.252948  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:24.254423  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:26.753214  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:28.753495  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:31.252492  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:33.253190  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:35.754328  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:38.255399  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:40.753526  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:43.253410  609255 pod_ready.go:103] pod "metrics-server-f79f97bbb-vwkjg" in "kube-system" namespace has status "Ready":"False"
	I0127 14:23:46.492081  603347 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000309562s
	I0127 14:23:46.492128  603347 kubeadm.go:310] 
	I0127 14:23:46.492190  603347 kubeadm.go:310] Unfortunately, an error has occurred:
	I0127 14:23:46.492232  603347 kubeadm.go:310] 	context deadline exceeded
	I0127 14:23:46.492243  603347 kubeadm.go:310] 
	I0127 14:23:46.492293  603347 kubeadm.go:310] This error is likely caused by:
	I0127 14:23:46.492341  603347 kubeadm.go:310] 	- The kubelet is not running
	I0127 14:23:46.492489  603347 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 14:23:46.492502  603347 kubeadm.go:310] 
	I0127 14:23:46.492683  603347 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 14:23:46.492755  603347 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0127 14:23:46.492810  603347 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0127 14:23:46.492819  603347 kubeadm.go:310] 
	I0127 14:23:46.492958  603347 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 14:23:46.493079  603347 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 14:23:46.493203  603347 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0127 14:23:46.493358  603347 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 14:23:46.493477  603347 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0127 14:23:46.493617  603347 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0127 14:23:46.495178  603347 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:23:46.495293  603347 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0127 14:23:46.495407  603347 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 14:23:46.495579  603347 kubeadm.go:394] duration metric: took 12m14.528559651s to StartCluster
	I0127 14:23:46.495646  603347 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:23:46.495737  603347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:23:46.546524  603347 cri.go:89] found id: "3f4936aae73f22ae3e81a856bd775d8d7c40e4651b7956ff99b9d37015f67b1a"
	I0127 14:23:46.546551  603347 cri.go:89] found id: ""
	I0127 14:23:46.546561  603347 logs.go:282] 1 containers: [3f4936aae73f22ae3e81a856bd775d8d7c40e4651b7956ff99b9d37015f67b1a]
	I0127 14:23:46.546632  603347 ssh_runner.go:195] Run: which crictl
	I0127 14:23:46.551289  603347 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:23:46.551383  603347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:23:46.591832  603347 cri.go:89] found id: ""
	I0127 14:23:46.591868  603347 logs.go:282] 0 containers: []
	W0127 14:23:46.591882  603347 logs.go:284] No container was found matching "etcd"
	I0127 14:23:46.591891  603347 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:23:46.591962  603347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:23:46.630283  603347 cri.go:89] found id: ""
	I0127 14:23:46.630316  603347 logs.go:282] 0 containers: []
	W0127 14:23:46.630327  603347 logs.go:284] No container was found matching "coredns"
	I0127 14:23:46.630336  603347 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:23:46.630403  603347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:23:46.668710  603347 cri.go:89] found id: "347ca7e0e9e15eb6f7053ed6db45974d59294d3aa99b601f6598af9f35507e98"
	I0127 14:23:46.668734  603347 cri.go:89] found id: ""
	I0127 14:23:46.668742  603347 logs.go:282] 1 containers: [347ca7e0e9e15eb6f7053ed6db45974d59294d3aa99b601f6598af9f35507e98]
	I0127 14:23:46.668807  603347 ssh_runner.go:195] Run: which crictl
	I0127 14:23:46.672898  603347 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:23:46.672976  603347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:23:46.709394  603347 cri.go:89] found id: ""
	I0127 14:23:46.709420  603347 logs.go:282] 0 containers: []
	W0127 14:23:46.709430  603347 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:23:46.709439  603347 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:23:46.709497  603347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:23:46.748218  603347 cri.go:89] found id: "9f522e16624c29ec66b12d5d2ce0ebab03227a6811b538ec3d54d8dfe1ae9f4c"
	I0127 14:23:46.748245  603347 cri.go:89] found id: ""
	I0127 14:23:46.748255  603347 logs.go:282] 1 containers: [9f522e16624c29ec66b12d5d2ce0ebab03227a6811b538ec3d54d8dfe1ae9f4c]
	I0127 14:23:46.748316  603347 ssh_runner.go:195] Run: which crictl
	I0127 14:23:46.753378  603347 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:23:46.753455  603347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:23:46.788670  603347 cri.go:89] found id: ""
	I0127 14:23:46.788700  603347 logs.go:282] 0 containers: []
	W0127 14:23:46.788710  603347 logs.go:284] No container was found matching "kindnet"
	I0127 14:23:46.788719  603347 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0127 14:23:46.788775  603347 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 14:23:46.821114  603347 cri.go:89] found id: ""
	I0127 14:23:46.821145  603347 logs.go:282] 0 containers: []
	W0127 14:23:46.821155  603347 logs.go:284] No container was found matching "storage-provisioner"
	I0127 14:23:46.821166  603347 logs.go:123] Gathering logs for kubelet ...
	I0127 14:23:46.821178  603347 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:23:46.955012  603347 logs.go:123] Gathering logs for dmesg ...
	I0127 14:23:46.955049  603347 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:23:46.969437  603347 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:23:46.969463  603347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:23:47.045691  603347 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:23:47.045715  603347 logs.go:123] Gathering logs for kube-apiserver [3f4936aae73f22ae3e81a856bd775d8d7c40e4651b7956ff99b9d37015f67b1a] ...
	I0127 14:23:47.045729  603347 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f4936aae73f22ae3e81a856bd775d8d7c40e4651b7956ff99b9d37015f67b1a"
	I0127 14:23:47.086750  603347 logs.go:123] Gathering logs for kube-scheduler [347ca7e0e9e15eb6f7053ed6db45974d59294d3aa99b601f6598af9f35507e98] ...
	I0127 14:23:47.086787  603347 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 347ca7e0e9e15eb6f7053ed6db45974d59294d3aa99b601f6598af9f35507e98"
	I0127 14:23:47.170804  603347 logs.go:123] Gathering logs for kube-controller-manager [9f522e16624c29ec66b12d5d2ce0ebab03227a6811b538ec3d54d8dfe1ae9f4c] ...
	I0127 14:23:47.170844  603347 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f522e16624c29ec66b12d5d2ce0ebab03227a6811b538ec3d54d8dfe1ae9f4c"
	I0127 14:23:47.207214  603347 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:23:47.207250  603347 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:23:47.429108  603347 logs.go:123] Gathering logs for container status ...
	I0127 14:23:47.429146  603347 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0127 14:23:47.483310  603347 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.32.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.004096525s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000309562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 14:23:47.483363  603347 out.go:270] * 
	W0127 14:23:47.483431  603347 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.32.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.004096525s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000309562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 14:23:47.483446  603347 out.go:270] * 
	W0127 14:23:47.484431  603347 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 14:23:47.487539  603347 out.go:201] 
	W0127 14:23:47.488633  603347 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.32.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.004096525s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000309562s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 14:23:47.488681  603347 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 14:23:47.488722  603347 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 14:23:47.489922  603347 out.go:201] 
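	A minimal sketch of the troubleshooting sequence that the kubeadm output and the K8S_KUBELET_NOT_RUNNING suggestion above point at, assuming the failing profile is kubernetes-upgrade-225004 (the node named in the CRI-O section below); the flags of the original minikube start invocation for this test are not reproduced here, so the re-run command is only illustrative:
	
		# Inspect the kubelet on the node (commands taken from the kubeadm hints above)
		minikube -p kubernetes-upgrade-225004 ssh "sudo systemctl status kubelet"
		minikube -p kubernetes-upgrade-225004 ssh "sudo journalctl -xeu kubelet | tail -n 200"
	
		# List the control-plane containers via crictl, as the kubeadm output suggests
		minikube -p kubernetes-upgrade-225004 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	
		# Hypothetical retry with the cgroup-driver override suggested in the log;
		# any other start flags used by the original test would need to be added as well
		minikube start -p kubernetes-upgrade-225004 --extra-config=kubelet.cgroup-driver=systemd
	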
	
	
	==> CRI-O <==
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.128256299Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987828128229036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0701610b-c0cc-4911-82c5-900fb650ce6f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.128711236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d12d8677-0fec-4d4e-88c0-cc63da24e38f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.128780337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d12d8677-0fec-4d4e-88c0-cc63da24e38f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.128864412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f522e16624c29ec66b12d5d2ce0ebab03227a6811b538ec3d54d8dfe1ae9f4c,PodSandboxId:2b85709bfdc5d0bfac69dd3c06cc674396918ff74a7e69f834da79d015e14081,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987771029521125,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eae7e92c712315082541fddff56525a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.co
ntainer.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4936aae73f22ae3e81a856bd775d8d7c40e4651b7956ff99b9d37015f67b1a,PodSandboxId:a48afdfe17d73f009c9391176fa34fc354ecfdb40cbac1b0db54f37495887c4c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737987758034259319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc89afb3e1bcdb93c2a406a7e6123a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.contai
ner.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347ca7e0e9e15eb6f7053ed6db45974d59294d3aa99b601f6598af9f35507e98,PodSandboxId:5d04aa76c5487492d20c8e5307b9a906df544a330290814f49fff16cc36f45f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737987586572828987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae40151d668eb7e168da957595a64007,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.
restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d12d8677-0fec-4d4e-88c0-cc63da24e38f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.158392884Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a3255e0-6218-466f-9d7a-d0ed9590d9e7 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.158473687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a3255e0-6218-466f-9d7a-d0ed9590d9e7 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.168114390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=badf0b3e-e9d0-4640-851d-16710e3cc728 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.168543575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987828168516811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=badf0b3e-e9d0-4640-851d-16710e3cc728 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.169109485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7767182b-b34a-46f2-a57e-524569249276 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.169260216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7767182b-b34a-46f2-a57e-524569249276 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.169377983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f522e16624c29ec66b12d5d2ce0ebab03227a6811b538ec3d54d8dfe1ae9f4c,PodSandboxId:2b85709bfdc5d0bfac69dd3c06cc674396918ff74a7e69f834da79d015e14081,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987771029521125,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eae7e92c712315082541fddff56525a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.co
ntainer.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4936aae73f22ae3e81a856bd775d8d7c40e4651b7956ff99b9d37015f67b1a,PodSandboxId:a48afdfe17d73f009c9391176fa34fc354ecfdb40cbac1b0db54f37495887c4c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737987758034259319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc89afb3e1bcdb93c2a406a7e6123a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.contai
ner.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347ca7e0e9e15eb6f7053ed6db45974d59294d3aa99b601f6598af9f35507e98,PodSandboxId:5d04aa76c5487492d20c8e5307b9a906df544a330290814f49fff16cc36f45f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737987586572828987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae40151d668eb7e168da957595a64007,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.
restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7767182b-b34a-46f2-a57e-524569249276 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.207575501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4219d28-14c8-41d8-8196-18bbc54c3f5c name=/runtime.v1.RuntimeService/Version
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.207640955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4219d28-14c8-41d8-8196-18bbc54c3f5c name=/runtime.v1.RuntimeService/Version
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.208764251Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1c79c39-b05f-4eee-ad26-4e7b32c101ac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.209247402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987828209109933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1c79c39-b05f-4eee-ad26-4e7b32c101ac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.209852185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0ec6620-7925-4411-b4d0-3b1a09ea5717 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.209924068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0ec6620-7925-4411-b4d0-3b1a09ea5717 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.210019354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f522e16624c29ec66b12d5d2ce0ebab03227a6811b538ec3d54d8dfe1ae9f4c,PodSandboxId:2b85709bfdc5d0bfac69dd3c06cc674396918ff74a7e69f834da79d015e14081,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987771029521125,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eae7e92c712315082541fddff56525a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.co
ntainer.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4936aae73f22ae3e81a856bd775d8d7c40e4651b7956ff99b9d37015f67b1a,PodSandboxId:a48afdfe17d73f009c9391176fa34fc354ecfdb40cbac1b0db54f37495887c4c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737987758034259319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc89afb3e1bcdb93c2a406a7e6123a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.contai
ner.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347ca7e0e9e15eb6f7053ed6db45974d59294d3aa99b601f6598af9f35507e98,PodSandboxId:5d04aa76c5487492d20c8e5307b9a906df544a330290814f49fff16cc36f45f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737987586572828987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae40151d668eb7e168da957595a64007,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.
restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0ec6620-7925-4411-b4d0-3b1a09ea5717 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.242253461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a93ae410-84df-4db6-924f-606675c0a3a7 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.242336217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a93ae410-84df-4db6-924f-606675c0a3a7 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.243434337Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68950342-eae1-4177-82aa-091a65c79853 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.243783668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987828243762691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68950342-eae1-4177-82aa-091a65c79853 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.244407849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7992be7-c677-44e9-946e-833525d6b8d6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.244473783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7992be7-c677-44e9-946e-833525d6b8d6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:23:48 kubernetes-upgrade-225004 crio[2820]: time="2025-01-27 14:23:48.244563382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f522e16624c29ec66b12d5d2ce0ebab03227a6811b538ec3d54d8dfe1ae9f4c,PodSandboxId:2b85709bfdc5d0bfac69dd3c06cc674396918ff74a7e69f834da79d015e14081,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987771029521125,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eae7e92c712315082541fddff56525a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.co
ntainer.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4936aae73f22ae3e81a856bd775d8d7c40e4651b7956ff99b9d37015f67b1a,PodSandboxId:a48afdfe17d73f009c9391176fa34fc354ecfdb40cbac1b0db54f37495887c4c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737987758034259319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc89afb3e1bcdb93c2a406a7e6123a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.contai
ner.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347ca7e0e9e15eb6f7053ed6db45974d59294d3aa99b601f6598af9f35507e98,PodSandboxId:5d04aa76c5487492d20c8e5307b9a906df544a330290814f49fff16cc36f45f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737987586572828987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-225004,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae40151d668eb7e168da957595a64007,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.
restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7992be7-c677-44e9-946e-833525d6b8d6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9f522e16624c2       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   57 seconds ago       Exited              kube-controller-manager   15                  2b85709bfdc5d       kube-controller-manager-kubernetes-upgrade-225004
	3f4936aae73f2       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   About a minute ago   Exited              kube-apiserver            15                  a48afdfe17d73       kube-apiserver-kubernetes-upgrade-225004
	347ca7e0e9e15       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   4 minutes ago        Running             kube-scheduler            4                   5d04aa76c5487       kube-scheduler-kubernetes-upgrade-225004
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.056057] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070588] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.168795] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.169017] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.302082] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +4.106712] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +2.134210] systemd-fstab-generator[844]: Ignoring "noauto" option for root device
	[  +0.057011] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.989077] systemd-fstab-generator[1254]: Ignoring "noauto" option for root device
	[  +0.100675] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.035392] systemd-fstab-generator[2446]: Ignoring "noauto" option for root device
	[  +0.292971] systemd-fstab-generator[2549]: Ignoring "noauto" option for root device
	[  +0.242635] systemd-fstab-generator[2579]: Ignoring "noauto" option for root device
	[  +0.203618] systemd-fstab-generator[2594]: Ignoring "noauto" option for root device
	[  +0.453370] systemd-fstab-generator[2672]: Ignoring "noauto" option for root device
	[  +0.105434] kauditd_printk_skb: 234 callbacks suppressed
	[Jan27 14:11] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +0.123151] kauditd_printk_skb: 12 callbacks suppressed
	[  +2.646725] systemd-fstab-generator[3463]: Ignoring "noauto" option for root device
	[ +22.861138] kauditd_printk_skb: 111 callbacks suppressed
	[Jan27 14:15] systemd-fstab-generator[8943]: Ignoring "noauto" option for root device
	[  +1.111493] kauditd_printk_skb: 43 callbacks suppressed
	[Jan27 14:16] kauditd_printk_skb: 25 callbacks suppressed
	[Jan27 14:19] systemd-fstab-generator[9927]: Ignoring "noauto" option for root device
	[Jan27 14:20] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> kernel <==
	 14:23:48 up 14 min,  0 users,  load average: 0.01, 0.13, 0.15
	Linux kubernetes-upgrade-225004 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3f4936aae73f22ae3e81a856bd775d8d7c40e4651b7956ff99b9d37015f67b1a] <==
	I0127 14:22:38.218229       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0127 14:22:38.499939       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:38.501067       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0127 14:22:38.508653       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0127 14:22:38.517746       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 14:22:38.525556       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 14:22:38.525603       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 14:22:38.525832       1 instance.go:233] Using reconciler: lease
	W0127 14:22:38.526761       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:39.500734       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:39.502212       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:39.527396       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:40.795991       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:40.885681       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:41.132730       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:43.073238       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:43.196541       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:43.352080       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:46.649275       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:47.200603       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:47.256699       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:53.264639       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:54.083983       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:22:54.512849       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0127 14:22:58.527428       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [9f522e16624c29ec66b12d5d2ce0ebab03227a6811b538ec3d54d8dfe1ae9f4c] <==
	I0127 14:22:51.366172       1 serving.go:386] Generated self-signed cert in-memory
	I0127 14:22:52.094479       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 14:22:52.094523       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:22:52.098039       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 14:22:52.098278       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 14:22:52.098343       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 14:22:52.098540       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0127 14:23:09.535342       1 controllermanager.go:230] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.83.145:8443/healthz\": dial tcp 192.168.83.145:8443: connect: connection refused"
	
	
	==> kube-scheduler [347ca7e0e9e15eb6f7053ed6db45974d59294d3aa99b601f6598af9f35507e98] <==
	E0127 14:23:27.843178       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.83.145:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:29.428823       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.83.145:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:29.428912       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.83.145:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:29.588988       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.83.145:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:29.589119       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.83.145:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:31.976548       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: Get "https://192.168.83.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:31.976611       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.83.145:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:32.767985       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.83.145:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:32.768047       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.83.145:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:41.157381       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.83.145:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:41.157446       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.83.145:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:43.914619       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.83.145:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:43.914702       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.83.145:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:43.982512       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.83.145:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:43.982565       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.83.145:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:46.194927       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.83.145:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:46.195001       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.83.145:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:46.533303       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.83.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:46.533366       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.83.145:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:46.541256       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.83.145:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:46.541334       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.83.145:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:47.916988       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.83.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:47.917069       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.83.145:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	W0127 14:23:48.451593       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.83.145:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.83.145:8443: connect: connection refused
	E0127 14:23:48.451666       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.83.145:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.83.145:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Jan 27 14:23:36 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:36.095534    9934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987816094607481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:23:40 kubernetes-upgrade-225004 kubelet[9934]: I0127 14:23:40.549655    9934 kubelet_node_status.go:76] "Attempting to register node" node="kubernetes-upgrade-225004"
	Jan 27 14:23:40 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:40.550924    9934 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.145:8443: connect: connection refused" node="kubernetes-upgrade-225004"
	Jan 27 14:23:41 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:41.548937    9934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-225004?timeout=10s\": dial tcp 192.168.83.145:8443: connect: connection refused" interval="7s"
	Jan 27 14:23:42 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:42.014380    9934 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-225004\" not found" node="kubernetes-upgrade-225004"
	Jan 27 14:23:42 kubernetes-upgrade-225004 kubelet[9934]: I0127 14:23:42.014709    9934 scope.go:117] "RemoveContainer" containerID="3f4936aae73f22ae3e81a856bd775d8d7c40e4651b7956ff99b9d37015f67b1a"
	Jan 27 14:23:42 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:42.014856    9934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-225004_kube-system(b8fc89afb3e1bcdb93c2a406a7e6123a)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-225004" podUID="b8fc89afb3e1bcdb93c2a406a7e6123a"
	Jan 27 14:23:43 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:43.013803    9934 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-225004\" not found" node="kubernetes-upgrade-225004"
	Jan 27 14:23:43 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:43.021609    9934 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-kubernetes-upgrade-225004_kube-system_f95470467052802fecdfa30efb8b29d1_1\" is already in use by bcacc7ae374404d53912c95883d7dc25dcf6111b51fed50d8e105e7271634275. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="ebee9ac78c4d9c31b0dbc04039596abd8cf9ed5aa006b74267817781b620a13b"
	Jan 27 14:23:43 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:43.021985    9934 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.16-0,Command:[etcd --advertise-client-urls=https://192.168.83.145:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.83.145:2380 --initial-cluster=kubernetes-upgrade-225004=https://192.168.83.145:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.83.145:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.83.145:2380 --name=kubernetes-upgrade-225004 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var
/lib/minikube/certs/etcd/ca.crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:n
il,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-kubernetes-upgrade-225004_kube-system(f95470467052802fecdfa30efb
8b29d1): CreateContainerError: the container name \"k8s_etcd_etcd-kubernetes-upgrade-225004_kube-system_f95470467052802fecdfa30efb8b29d1_1\" is already in use by bcacc7ae374404d53912c95883d7dc25dcf6111b51fed50d8e105e7271634275. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Jan 27 14:23:43 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:43.023370    9934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-kubernetes-upgrade-225004_kube-system_f95470467052802fecdfa30efb8b29d1_1\\\" is already in use by bcacc7ae374404d53912c95883d7dc25dcf6111b51fed50d8e105e7271634275. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-kubernetes-upgrade-225004" podUID="f95470467052802fecdfa30efb8b29d1"
	Jan 27 14:23:44 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:44.014463    9934 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-225004\" not found" node="kubernetes-upgrade-225004"
	Jan 27 14:23:44 kubernetes-upgrade-225004 kubelet[9934]: I0127 14:23:44.014552    9934 scope.go:117] "RemoveContainer" containerID="9f522e16624c29ec66b12d5d2ce0ebab03227a6811b538ec3d54d8dfe1ae9f4c"
	Jan 27 14:23:44 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:44.014666    9934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-225004_kube-system(5eae7e92c712315082541fddff56525a)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-225004" podUID="5eae7e92c712315082541fddff56525a"
	Jan 27 14:23:44 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:44.299121    9934 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.83.145:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-225004.181e9280517b8c54  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-225004,UID:kubernetes-upgrade-225004,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node kubernetes-upgrade-225004 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-225004,},FirstTimestamp:2025-01-27 14:19:46.032110676 +0000 UTC m=+0.580744638,LastTimestamp:2025-01-27 14:19:46.032110676 +0000 UTC m=+0.580744638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,
ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-225004,}"
	Jan 27 14:23:46 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:46.029955    9934 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 14:23:46 kubernetes-upgrade-225004 kubelet[9934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 14:23:46 kubernetes-upgrade-225004 kubelet[9934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 14:23:46 kubernetes-upgrade-225004 kubelet[9934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 14:23:46 kubernetes-upgrade-225004 kubelet[9934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 14:23:46 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:46.097766    9934 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987826097435478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:23:46 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:46.097809    9934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987826097435478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:23:47 kubernetes-upgrade-225004 kubelet[9934]: I0127 14:23:47.552888    9934 kubelet_node_status.go:76] "Attempting to register node" node="kubernetes-upgrade-225004"
	Jan 27 14:23:47 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:47.553954    9934 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.145:8443: connect: connection refused" node="kubernetes-upgrade-225004"
	Jan 27 14:23:48 kubernetes-upgrade-225004 kubelet[9934]: E0127 14:23:48.550457    9934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-225004?timeout=10s\": dial tcp 192.168.83.145:8443: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-225004 -n kubernetes-upgrade-225004
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-225004 -n kubernetes-upgrade-225004: exit status 2 (245.533982ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-225004" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-225004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-225004
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-225004: (1.001132479s)
--- FAIL: TestKubernetesUpgrade (1145.86s)
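
The dump above shows why the upgrade never converged: kube-apiserver keeps exiting (restart count 15) because it cannot reach etcd on 127.0.0.1:2379 ("connection refused", ending in "Error creating leases: error creating storage factory: context deadline exceeded"), and kubelet cannot recreate the etcd container because its name is still held by a stale container (bcacc7ae3744...). The lines below are a minimal triage sketch, not part of the test, and assume the profile had not yet been deleted by the cleanup step; the container ID is the one quoted in the kubelet log above.

	# confirm etcd never started and find the stale container holding the name
	out/minikube-linux-amd64 -p kubernetes-upgrade-225004 ssh "sudo crictl ps -a | grep -E 'etcd|kube-apiserver'"
	# remove the stale container so kubelet can recreate etcd (ID taken from the kubelet log above)
	out/minikube-linux-amd64 -p kubernetes-upgrade-225004 ssh "sudo crictl rm bcacc7ae374404d53912c95883d7dc25dcf6111b51fed50d8e105e7271634275"
	# watch whether etcd and kube-apiserver come back, then probe the apiserver directly
	out/minikube-linux-amd64 -p kubernetes-upgrade-225004 ssh "sudo crictl ps -a | grep -E 'etcd|kube-apiserver'"
	out/minikube-linux-amd64 -p kubernetes-upgrade-225004 ssh "curl -sk https://localhost:8443/healthz"

If etcd still refuses connections after the name collision is cleared, the next place to look would be the etcd container's own output (sudo crictl logs <etcd-container-id>) and the data directory under /var/lib/minikube/etcd.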

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (274.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-456130 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-456130 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m34.56136829s)

                                                
                                                
-- stdout --
	* [old-k8s-version-456130] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-456130" primary control-plane node in "old-k8s-version-456130" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:07:08.366778  601373 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:07:08.366884  601373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:07:08.366892  601373 out.go:358] Setting ErrFile to fd 2...
	I0127 14:07:08.366897  601373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:07:08.367066  601373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:07:08.367619  601373 out.go:352] Setting JSON to false
	I0127 14:07:08.368564  601373 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17373,"bootTime":1737969455,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:07:08.368664  601373 start.go:139] virtualization: kvm guest
	I0127 14:07:08.370601  601373 out.go:177] * [old-k8s-version-456130] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:07:08.371819  601373 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:07:08.371835  601373 notify.go:220] Checking for updates...
	I0127 14:07:08.373936  601373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:07:08.374975  601373 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:07:08.376042  601373 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:07:08.377161  601373 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:07:08.378332  601373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:07:08.379763  601373 config.go:182] Loaded profile config "cert-expiration-335486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:07:08.379849  601373 config.go:182] Loaded profile config "kubernetes-upgrade-225004": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:07:08.379934  601373 config.go:182] Loaded profile config "pause-966446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:07:08.380012  601373 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:07:08.413721  601373 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:07:08.414722  601373 start.go:297] selected driver: kvm2
	I0127 14:07:08.414739  601373 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:07:08.414753  601373 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:07:08.415804  601373 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:08.415883  601373 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:07:08.430334  601373 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:07:08.430382  601373 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:07:08.430632  601373 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:07:08.430667  601373 cni.go:84] Creating CNI manager for ""
	I0127 14:07:08.430711  601373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:07:08.430724  601373 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:07:08.430769  601373 start.go:340] cluster config:
	{Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:07:08.430868  601373 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:08.432318  601373 out.go:177] * Starting "old-k8s-version-456130" primary control-plane node in "old-k8s-version-456130" cluster
	I0127 14:07:08.433435  601373 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 14:07:08.433467  601373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 14:07:08.433474  601373 cache.go:56] Caching tarball of preloaded images
	I0127 14:07:08.433575  601373 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:07:08.433604  601373 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 14:07:08.433692  601373 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/config.json ...
	I0127 14:07:08.433714  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/config.json: {Name:mk7e37e7227c826f68b97ca62c90173b8ea88022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:08.433874  601373 start.go:360] acquireMachinesLock for old-k8s-version-456130: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:07:14.245971  601373 start.go:364] duration metric: took 5.812042353s to acquireMachinesLock for "old-k8s-version-456130"
	I0127 14:07:14.246038  601373 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-versi
on-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:07:14.246168  601373 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 14:07:14.247788  601373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 14:07:14.247969  601373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:07:14.248025  601373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:07:14.264869  601373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0127 14:07:14.265274  601373 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:07:14.265832  601373 main.go:141] libmachine: Using API Version  1
	I0127 14:07:14.265856  601373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:07:14.266195  601373 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:07:14.266413  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetMachineName
	I0127 14:07:14.266564  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:14.266728  601373 start.go:159] libmachine.API.Create for "old-k8s-version-456130" (driver="kvm2")
	I0127 14:07:14.266758  601373 client.go:168] LocalClient.Create starting
	I0127 14:07:14.266793  601373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem
	I0127 14:07:14.266851  601373 main.go:141] libmachine: Decoding PEM data...
	I0127 14:07:14.266874  601373 main.go:141] libmachine: Parsing certificate...
	I0127 14:07:14.266948  601373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem
	I0127 14:07:14.266976  601373 main.go:141] libmachine: Decoding PEM data...
	I0127 14:07:14.266995  601373 main.go:141] libmachine: Parsing certificate...
	I0127 14:07:14.267024  601373 main.go:141] libmachine: Running pre-create checks...
	I0127 14:07:14.267037  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .PreCreateCheck
	I0127 14:07:14.267370  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetConfigRaw
	I0127 14:07:14.267852  601373 main.go:141] libmachine: Creating machine...
	I0127 14:07:14.267871  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .Create
	I0127 14:07:14.268010  601373 main.go:141] libmachine: (old-k8s-version-456130) creating KVM machine...
	I0127 14:07:14.268033  601373 main.go:141] libmachine: (old-k8s-version-456130) creating network...
	I0127 14:07:14.269232  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found existing default KVM network
	I0127 14:07:14.270830  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:14.270677  601428 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000264180}
	I0127 14:07:14.270859  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | created network xml: 
	I0127 14:07:14.270871  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | <network>
	I0127 14:07:14.270879  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG |   <name>mk-old-k8s-version-456130</name>
	I0127 14:07:14.270915  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG |   <dns enable='no'/>
	I0127 14:07:14.270940  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG |   
	I0127 14:07:14.270979  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 14:07:14.270999  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG |     <dhcp>
	I0127 14:07:14.271006  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 14:07:14.271014  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG |     </dhcp>
	I0127 14:07:14.271036  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG |   </ip>
	I0127 14:07:14.271048  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG |   
	I0127 14:07:14.271061  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | </network>
	I0127 14:07:14.271071  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | 
	I0127 14:07:14.275782  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | trying to create private KVM network mk-old-k8s-version-456130 192.168.39.0/24...
	I0127 14:07:14.344710  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | private KVM network mk-old-k8s-version-456130 192.168.39.0/24 created
	I0127 14:07:14.344763  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:14.344677  601428 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:07:14.344826  601373 main.go:141] libmachine: (old-k8s-version-456130) setting up store path in /home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130 ...
	I0127 14:07:14.344854  601373 main.go:141] libmachine: (old-k8s-version-456130) building disk image from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:07:14.344929  601373 main.go:141] libmachine: (old-k8s-version-456130) Downloading /home/jenkins/minikube-integration/20327-555419/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 14:07:14.639945  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:14.639762  601428 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa...
	I0127 14:07:14.728358  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:14.728257  601428 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/old-k8s-version-456130.rawdisk...
	I0127 14:07:14.728394  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | Writing magic tar header
	I0127 14:07:14.728416  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | Writing SSH key tar header
	I0127 14:07:14.728436  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:14.728373  601428 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130 ...
	I0127 14:07:14.728507  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130
	I0127 14:07:14.728538  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines
	I0127 14:07:14.728556  601373 main.go:141] libmachine: (old-k8s-version-456130) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130 (perms=drwx------)
	I0127 14:07:14.728571  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:07:14.728592  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419
	I0127 14:07:14.728623  601373 main.go:141] libmachine: (old-k8s-version-456130) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines (perms=drwxr-xr-x)
	I0127 14:07:14.728641  601373 main.go:141] libmachine: (old-k8s-version-456130) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube (perms=drwxr-xr-x)
	I0127 14:07:14.728653  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 14:07:14.728667  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | checking permissions on dir: /home/jenkins
	I0127 14:07:14.728683  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | checking permissions on dir: /home
	I0127 14:07:14.728698  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | skipping /home - not owner
	I0127 14:07:14.728720  601373 main.go:141] libmachine: (old-k8s-version-456130) setting executable bit set on /home/jenkins/minikube-integration/20327-555419 (perms=drwxrwxr-x)
	I0127 14:07:14.728736  601373 main.go:141] libmachine: (old-k8s-version-456130) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 14:07:14.728754  601373 main.go:141] libmachine: (old-k8s-version-456130) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 14:07:14.728768  601373 main.go:141] libmachine: (old-k8s-version-456130) creating domain...
	I0127 14:07:14.729849  601373 main.go:141] libmachine: (old-k8s-version-456130) define libvirt domain using xml: 
	I0127 14:07:14.729873  601373 main.go:141] libmachine: (old-k8s-version-456130) <domain type='kvm'>
	I0127 14:07:14.729883  601373 main.go:141] libmachine: (old-k8s-version-456130)   <name>old-k8s-version-456130</name>
	I0127 14:07:14.729891  601373 main.go:141] libmachine: (old-k8s-version-456130)   <memory unit='MiB'>2200</memory>
	I0127 14:07:14.729900  601373 main.go:141] libmachine: (old-k8s-version-456130)   <vcpu>2</vcpu>
	I0127 14:07:14.729910  601373 main.go:141] libmachine: (old-k8s-version-456130)   <features>
	I0127 14:07:14.729919  601373 main.go:141] libmachine: (old-k8s-version-456130)     <acpi/>
	I0127 14:07:14.729929  601373 main.go:141] libmachine: (old-k8s-version-456130)     <apic/>
	I0127 14:07:14.729945  601373 main.go:141] libmachine: (old-k8s-version-456130)     <pae/>
	I0127 14:07:14.729957  601373 main.go:141] libmachine: (old-k8s-version-456130)     
	I0127 14:07:14.729966  601373 main.go:141] libmachine: (old-k8s-version-456130)   </features>
	I0127 14:07:14.729982  601373 main.go:141] libmachine: (old-k8s-version-456130)   <cpu mode='host-passthrough'>
	I0127 14:07:14.729992  601373 main.go:141] libmachine: (old-k8s-version-456130)   
	I0127 14:07:14.729998  601373 main.go:141] libmachine: (old-k8s-version-456130)   </cpu>
	I0127 14:07:14.730008  601373 main.go:141] libmachine: (old-k8s-version-456130)   <os>
	I0127 14:07:14.730019  601373 main.go:141] libmachine: (old-k8s-version-456130)     <type>hvm</type>
	I0127 14:07:14.730027  601373 main.go:141] libmachine: (old-k8s-version-456130)     <boot dev='cdrom'/>
	I0127 14:07:14.730035  601373 main.go:141] libmachine: (old-k8s-version-456130)     <boot dev='hd'/>
	I0127 14:07:14.730044  601373 main.go:141] libmachine: (old-k8s-version-456130)     <bootmenu enable='no'/>
	I0127 14:07:14.730053  601373 main.go:141] libmachine: (old-k8s-version-456130)   </os>
	I0127 14:07:14.730085  601373 main.go:141] libmachine: (old-k8s-version-456130)   <devices>
	I0127 14:07:14.730109  601373 main.go:141] libmachine: (old-k8s-version-456130)     <disk type='file' device='cdrom'>
	I0127 14:07:14.730130  601373 main.go:141] libmachine: (old-k8s-version-456130)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/boot2docker.iso'/>
	I0127 14:07:14.730142  601373 main.go:141] libmachine: (old-k8s-version-456130)       <target dev='hdc' bus='scsi'/>
	I0127 14:07:14.730160  601373 main.go:141] libmachine: (old-k8s-version-456130)       <readonly/>
	I0127 14:07:14.730171  601373 main.go:141] libmachine: (old-k8s-version-456130)     </disk>
	I0127 14:07:14.730270  601373 main.go:141] libmachine: (old-k8s-version-456130)     <disk type='file' device='disk'>
	I0127 14:07:14.730335  601373 main.go:141] libmachine: (old-k8s-version-456130)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 14:07:14.730365  601373 main.go:141] libmachine: (old-k8s-version-456130)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/old-k8s-version-456130.rawdisk'/>
	I0127 14:07:14.730381  601373 main.go:141] libmachine: (old-k8s-version-456130)       <target dev='hda' bus='virtio'/>
	I0127 14:07:14.730406  601373 main.go:141] libmachine: (old-k8s-version-456130)     </disk>
	I0127 14:07:14.730413  601373 main.go:141] libmachine: (old-k8s-version-456130)     <interface type='network'>
	I0127 14:07:14.730419  601373 main.go:141] libmachine: (old-k8s-version-456130)       <source network='mk-old-k8s-version-456130'/>
	I0127 14:07:14.730426  601373 main.go:141] libmachine: (old-k8s-version-456130)       <model type='virtio'/>
	I0127 14:07:14.730431  601373 main.go:141] libmachine: (old-k8s-version-456130)     </interface>
	I0127 14:07:14.730435  601373 main.go:141] libmachine: (old-k8s-version-456130)     <interface type='network'>
	I0127 14:07:14.730443  601373 main.go:141] libmachine: (old-k8s-version-456130)       <source network='default'/>
	I0127 14:07:14.730447  601373 main.go:141] libmachine: (old-k8s-version-456130)       <model type='virtio'/>
	I0127 14:07:14.730457  601373 main.go:141] libmachine: (old-k8s-version-456130)     </interface>
	I0127 14:07:14.730461  601373 main.go:141] libmachine: (old-k8s-version-456130)     <serial type='pty'>
	I0127 14:07:14.730467  601373 main.go:141] libmachine: (old-k8s-version-456130)       <target port='0'/>
	I0127 14:07:14.730471  601373 main.go:141] libmachine: (old-k8s-version-456130)     </serial>
	I0127 14:07:14.730475  601373 main.go:141] libmachine: (old-k8s-version-456130)     <console type='pty'>
	I0127 14:07:14.730482  601373 main.go:141] libmachine: (old-k8s-version-456130)       <target type='serial' port='0'/>
	I0127 14:07:14.730486  601373 main.go:141] libmachine: (old-k8s-version-456130)     </console>
	I0127 14:07:14.730491  601373 main.go:141] libmachine: (old-k8s-version-456130)     <rng model='virtio'>
	I0127 14:07:14.730497  601373 main.go:141] libmachine: (old-k8s-version-456130)       <backend model='random'>/dev/random</backend>
	I0127 14:07:14.730503  601373 main.go:141] libmachine: (old-k8s-version-456130)     </rng>
	I0127 14:07:14.730508  601373 main.go:141] libmachine: (old-k8s-version-456130)     
	I0127 14:07:14.730514  601373 main.go:141] libmachine: (old-k8s-version-456130)     
	I0127 14:07:14.730519  601373 main.go:141] libmachine: (old-k8s-version-456130)   </devices>
	I0127 14:07:14.730523  601373 main.go:141] libmachine: (old-k8s-version-456130) </domain>
	I0127 14:07:14.730543  601373 main.go:141] libmachine: (old-k8s-version-456130) 
	I0127 14:07:14.734700  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:95:18:c7 in network default
	I0127 14:07:14.735324  601373 main.go:141] libmachine: (old-k8s-version-456130) starting domain...
	I0127 14:07:14.735345  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:14.735354  601373 main.go:141] libmachine: (old-k8s-version-456130) ensuring networks are active...
	I0127 14:07:14.736078  601373 main.go:141] libmachine: (old-k8s-version-456130) Ensuring network default is active
	I0127 14:07:14.736546  601373 main.go:141] libmachine: (old-k8s-version-456130) Ensuring network mk-old-k8s-version-456130 is active
	I0127 14:07:14.737229  601373 main.go:141] libmachine: (old-k8s-version-456130) getting domain XML...
	I0127 14:07:14.738114  601373 main.go:141] libmachine: (old-k8s-version-456130) creating domain...
	I0127 14:07:15.087513  601373 main.go:141] libmachine: (old-k8s-version-456130) waiting for IP...
	I0127 14:07:15.088394  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:15.088892  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:15.088956  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:15.088862  601428 retry.go:31] will retry after 201.21015ms: waiting for domain to come up
	I0127 14:07:15.291315  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:15.291898  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:15.291928  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:15.291865  601428 retry.go:31] will retry after 364.577717ms: waiting for domain to come up
	I0127 14:07:15.658669  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:15.659190  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:15.659228  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:15.659157  601428 retry.go:31] will retry after 404.191369ms: waiting for domain to come up
	I0127 14:07:16.064871  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:16.065379  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:16.065404  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:16.065360  601428 retry.go:31] will retry after 381.976708ms: waiting for domain to come up
	I0127 14:07:16.449201  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:16.449936  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:16.449972  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:16.449880  601428 retry.go:31] will retry after 721.819339ms: waiting for domain to come up
	I0127 14:07:17.173987  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:17.174609  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:17.174767  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:17.174677  601428 retry.go:31] will retry after 772.050716ms: waiting for domain to come up
	I0127 14:07:17.948225  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:17.948829  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:17.948864  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:17.948796  601428 retry.go:31] will retry after 1.177964657s: waiting for domain to come up
	I0127 14:07:19.128849  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:19.129408  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:19.129462  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:19.129393  601428 retry.go:31] will retry after 1.350853517s: waiting for domain to come up
	I0127 14:07:20.481419  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:20.482014  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:20.482047  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:20.481986  601428 retry.go:31] will retry after 1.638043789s: waiting for domain to come up
	I0127 14:07:22.122901  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:22.123420  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:22.123453  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:22.123394  601428 retry.go:31] will retry after 1.931533143s: waiting for domain to come up
	I0127 14:07:24.056550  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:24.057018  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:24.057055  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:24.056972  601428 retry.go:31] will retry after 2.631367173s: waiting for domain to come up
	I0127 14:07:26.690994  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:26.691454  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:26.691483  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:26.691426  601428 retry.go:31] will retry after 2.87215572s: waiting for domain to come up
	I0127 14:07:29.565695  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:29.566562  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:29.566623  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:29.566524  601428 retry.go:31] will retry after 3.604112007s: waiting for domain to come up
	I0127 14:07:33.172403  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:33.172849  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:07:33.172876  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:07:33.172812  601428 retry.go:31] will retry after 4.68333271s: waiting for domain to come up
	I0127 14:07:37.858679  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:37.859268  601373 main.go:141] libmachine: (old-k8s-version-456130) found domain IP: 192.168.39.11
	I0127 14:07:37.859297  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has current primary IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:37.859306  601373 main.go:141] libmachine: (old-k8s-version-456130) reserving static IP address...
	I0127 14:07:37.859773  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-456130", mac: "52:54:00:7a:98:59", ip: "192.168.39.11"} in network mk-old-k8s-version-456130
	I0127 14:07:37.936631  601373 main.go:141] libmachine: (old-k8s-version-456130) reserved static IP address 192.168.39.11 for domain old-k8s-version-456130
	I0127 14:07:37.936653  601373 main.go:141] libmachine: (old-k8s-version-456130) waiting for SSH...
	I0127 14:07:37.936672  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | Getting to WaitForSSH function...
	I0127 14:07:37.939856  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:37.940315  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:37.940348  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:37.940513  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | Using SSH client type: external
	I0127 14:07:37.940543  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa (-rw-------)
	I0127 14:07:37.940587  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:07:37.940600  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | About to run SSH command:
	I0127 14:07:37.940617  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | exit 0
	I0127 14:07:38.073351  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | SSH cmd err, output: <nil>: 
	I0127 14:07:38.073659  601373 main.go:141] libmachine: (old-k8s-version-456130) KVM machine creation complete
	I0127 14:07:38.074038  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetConfigRaw
	I0127 14:07:38.074586  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:38.074794  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:38.074968  601373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 14:07:38.074986  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetState
	I0127 14:07:38.076416  601373 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 14:07:38.076432  601373 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 14:07:38.076437  601373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 14:07:38.076443  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.078903  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.079276  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.079301  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.079448  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.079611  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.079767  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.079869  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.080054  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:38.080290  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:38.080303  601373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 14:07:38.192900  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:07:38.192925  601373 main.go:141] libmachine: Detecting the provisioner...
	I0127 14:07:38.192936  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.196189  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.196679  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.196715  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.196922  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.197138  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.197349  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.197518  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.197752  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:38.197988  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:38.198002  601373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 14:07:38.314425  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 14:07:38.314531  601373 main.go:141] libmachine: found compatible host: buildroot
	I0127 14:07:38.314544  601373 main.go:141] libmachine: Provisioning with buildroot...
	I0127 14:07:38.314556  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetMachineName
	I0127 14:07:38.314848  601373 buildroot.go:166] provisioning hostname "old-k8s-version-456130"
	I0127 14:07:38.314881  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetMachineName
	I0127 14:07:38.315107  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.318691  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.319182  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.319216  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.319400  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.319640  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.319829  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.320042  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.320258  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:38.320476  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:38.320491  601373 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-456130 && echo "old-k8s-version-456130" | sudo tee /etc/hostname
	I0127 14:07:38.454864  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-456130
	
	I0127 14:07:38.454932  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.457742  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.458173  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.458207  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.458350  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.458587  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.458762  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.458927  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.459102  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:38.459311  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:38.459349  601373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-456130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-456130/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-456130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:07:38.585645  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:07:38.585683  601373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:07:38.585741  601373 buildroot.go:174] setting up certificates
	I0127 14:07:38.585755  601373 provision.go:84] configureAuth start
	I0127 14:07:38.585772  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetMachineName
	I0127 14:07:38.586102  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:38.589345  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.589793  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.589823  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.589991  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.592421  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.592828  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.592860  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.593007  601373 provision.go:143] copyHostCerts
	I0127 14:07:38.593064  601373 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:07:38.593091  601373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:07:38.593170  601373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:07:38.593347  601373 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:07:38.593362  601373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:07:38.593392  601373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:07:38.593472  601373 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:07:38.593481  601373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:07:38.593503  601373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:07:38.593570  601373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-456130 san=[127.0.0.1 192.168.39.11 localhost minikube old-k8s-version-456130]
	I0127 14:07:38.768898  601373 provision.go:177] copyRemoteCerts
	I0127 14:07:38.768964  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:07:38.768999  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.771730  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.772083  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.772124  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.772282  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.772477  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.772635  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.772784  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:38.859870  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:07:38.885052  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 14:07:38.911635  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:07:38.935458  601373 provision.go:87] duration metric: took 349.687848ms to configureAuth
	I0127 14:07:38.935490  601373 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:07:38.935724  601373 config.go:182] Loaded profile config "old-k8s-version-456130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:07:38.935827  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.939100  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.939413  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.939445  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.939604  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.939827  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.940036  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.940197  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.940380  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:38.940629  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:38.940652  601373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:07:39.198836  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:07:39.198866  601373 main.go:141] libmachine: Checking connection to Docker...
	I0127 14:07:39.198874  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetURL
	I0127 14:07:39.200067  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | using libvirt version 6000000
	I0127 14:07:39.203833  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.204766  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.204793  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.205007  601373 main.go:141] libmachine: Docker is up and running!
	I0127 14:07:39.205024  601373 main.go:141] libmachine: Reticulating splines...
	I0127 14:07:39.205031  601373 client.go:171] duration metric: took 24.938263372s to LocalClient.Create
	I0127 14:07:39.205058  601373 start.go:167] duration metric: took 24.938330128s to libmachine.API.Create "old-k8s-version-456130"
	I0127 14:07:39.205072  601373 start.go:293] postStartSetup for "old-k8s-version-456130" (driver="kvm2")
	I0127 14:07:39.205093  601373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:07:39.205118  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.205374  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:07:39.205407  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.210121  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.212293  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.212324  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.212592  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.212757  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.212942  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.213088  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:39.300676  601373 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:07:39.305063  601373 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:07:39.305089  601373 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:07:39.305171  601373 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:07:39.305268  601373 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:07:39.305392  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:07:39.316817  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:39.342960  601373 start.go:296] duration metric: took 137.875244ms for postStartSetup
	I0127 14:07:39.343015  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetConfigRaw
	I0127 14:07:39.343611  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:39.753533  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.753907  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.753930  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.754271  601373 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/config.json ...
	I0127 14:07:39.754483  601373 start.go:128] duration metric: took 25.508299796s to createHost
	I0127 14:07:39.754518  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.756915  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.757237  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.757272  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.757400  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.757611  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.757779  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.757926  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.758089  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:39.758248  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:39.758258  601373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:07:39.879057  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986859.855229643
	
	I0127 14:07:39.879079  601373 fix.go:216] guest clock: 1737986859.855229643
	I0127 14:07:39.879088  601373 fix.go:229] Guest: 2025-01-27 14:07:39.855229643 +0000 UTC Remote: 2025-01-27 14:07:39.75450005 +0000 UTC m=+31.428265457 (delta=100.729593ms)
	I0127 14:07:39.879122  601373 fix.go:200] guest clock delta is within tolerance: 100.729593ms
	I0127 14:07:39.879129  601373 start.go:83] releasing machines lock for "old-k8s-version-456130", held for 25.633123341s
	I0127 14:07:39.879156  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.879419  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:39.882266  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.882753  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.882778  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.882967  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.883551  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.883743  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.883842  601373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:07:39.883882  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.884110  601373 ssh_runner.go:195] Run: cat /version.json
	I0127 14:07:39.884136  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.886654  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887060  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.887121  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887145  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887321  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.887480  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.887648  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.887663  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.887669  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887828  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:39.887853  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.888019  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.888172  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.888306  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:39.974593  601373 ssh_runner.go:195] Run: systemctl --version
	I0127 14:07:39.998185  601373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:07:40.159948  601373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:07:40.166159  601373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:07:40.166229  601373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:07:40.185635  601373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:07:40.185657  601373 start.go:495] detecting cgroup driver to use...
	I0127 14:07:40.185727  601373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:07:40.204886  601373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:07:40.218758  601373 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:07:40.218813  601373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:07:40.234338  601373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:07:40.249194  601373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:07:40.405723  601373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:07:40.561717  601373 docker.go:233] disabling docker service ...
	I0127 14:07:40.561787  601373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:07:40.577711  601373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:07:40.593087  601373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:07:40.765539  601373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:07:40.900954  601373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:07:40.915793  601373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:07:40.935250  601373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 14:07:40.935316  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.945849  601373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:07:40.945907  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.955796  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.965535  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.975655  601373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:07:40.985983  601373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:07:40.995087  601373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:07:40.995142  601373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:07:41.007442  601373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:07:41.018580  601373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:41.150827  601373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:07:41.235346  601373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:07:41.235426  601373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:07:41.239989  601373 start.go:563] Will wait 60s for crictl version
	I0127 14:07:41.240037  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:41.243750  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:07:41.280633  601373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:07:41.280709  601373 ssh_runner.go:195] Run: crio --version
	I0127 14:07:41.312743  601373 ssh_runner.go:195] Run: crio --version
	I0127 14:07:41.342444  601373 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 14:07:41.343595  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:41.346163  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:41.346587  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:41.346619  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:41.346796  601373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 14:07:41.351141  601373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
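
The one-liner above rewrites /etc/hosts idempotently: strip any existing host.minikube.internal entry, append the current one, and copy the result back. A native Go sketch of the same rewrite; it assumes permission to write /etc/hosts, and the IP is the one from this run:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Rewrites /etc/hosts so it contains exactly one "host.minikube.internal" entry,
    // which is what the grep -v / echo / cp pipeline in the log achieves.
    func main() {
        const hostsPath = "/etc/hosts"
        const entry = "192.168.39.1\thost.minikube.internal"

        data, err := os.ReadFile(hostsPath)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
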
	I0127 14:07:41.363722  601373 kubeadm.go:883] updating cluster {Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:07:41.363830  601373 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 14:07:41.363893  601373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:41.394760  601373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 14:07:41.394820  601373 ssh_runner.go:195] Run: which lz4
	I0127 14:07:41.398404  601373 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:07:41.402316  601373 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:07:41.402348  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 14:07:43.199494  601373 crio.go:462] duration metric: took 1.801104328s to copy over tarball
	I0127 14:07:43.199572  601373 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:07:45.681236  601373 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.481625055s)
	I0127 14:07:45.681272  601373 crio.go:469] duration metric: took 2.481746403s to extract the tarball
	I0127 14:07:45.681283  601373 ssh_runner.go:146] rm: /preloaded.tar.lz4
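
The preload path is: check whether /preloaded.tar.lz4 already exists on the guest, copy the cached tarball over if not, extract it into /var with xattrs preserved, then delete it. A rough local equivalent, with the scp step omitted since it needs the host-side cache:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4" // path used on the guest in the log above

        // Existence check corresponding to `stat -c "%s %y" /preloaded.tar.lz4`.
        if _, err := os.Stat(tarball); err != nil {
            // In the real flow the tarball is scp'd from the local preload cache here;
            // this sketch just reports that it is missing.
            log.Fatalf("preload tarball not present: %v", err)
        }

        // Extract with xattrs preserved, matching the tar invocation from the log.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }

        // The tarball is removed once the images are unpacked.
        if out, err := exec.Command("sudo", "rm", "-f", tarball).CombinedOutput(); err != nil {
            log.Printf("cleanup failed: %v\n%s", err, out)
        }
    }
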
	I0127 14:07:45.723404  601373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:45.766291  601373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 14:07:45.766315  601373 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 14:07:45.766388  601373 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:07:45.766433  601373 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:45.766461  601373 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:45.766492  601373 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:45.766533  601373 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 14:07:45.766531  601373 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 14:07:45.766468  601373 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:45.766411  601373 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:45.767945  601373 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:45.767945  601373 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:45.767990  601373 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 14:07:45.767960  601373 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:07:45.768071  601373 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 14:07:45.767958  601373 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:45.767963  601373 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:45.767961  601373 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:45.921712  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:45.928344  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:45.928604  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:45.933554  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:45.933934  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:45.938018  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 14:07:45.983763  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 14:07:46.033999  601373 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 14:07:46.034053  601373 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.034110  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.068009  601373 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 14:07:46.068059  601373 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.068054  601373 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 14:07:46.068098  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.068110  601373 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.068153  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.104861  601373 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 14:07:46.104892  601373 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 14:07:46.104913  601373 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.104924  601373 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.104953  601373 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 14:07:46.104964  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.104980  601373 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.105007  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.104962  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.110692  601373 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 14:07:46.110724  601373 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 14:07:46.110749  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.110774  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.110698  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.110853  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.118171  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.118213  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.118271  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.133826  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:07:46.248375  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.248407  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.259675  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.259775  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.285086  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.285190  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.297983  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:07:46.375150  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.406972  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.407097  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.407118  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.429281  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.441476  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:07:46.441554  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.519114  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 14:07:46.566524  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 14:07:46.566546  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 14:07:46.566643  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 14:07:46.584390  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 14:07:46.585274  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 14:07:46.585389  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 14:07:46.674534  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:07:46.815091  601373 cache_images.go:92] duration metric: took 1.048759178s to LoadCachedImages
	W0127 14:07:46.815206  601373 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
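
Each "needs transfer" line above is the result of comparing the image ID the runtime reports for a tag against the ID expected for the cached copy; on mismatch the image is removed and reloaded from the local cache, which is the step that fails here because the cache files are missing. A sketch of that decision for one image; expectedID and cachePath are illustrative values, and `podman load` is only an assumption about how the cached tarball would be imported:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        const image = "registry.k8s.io/kube-scheduler:v1.20.0"
        // Illustrative values; the real ones come from minikube's image cache.
        const expectedID = "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
        const cachePath = "/path/to/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0"

        out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
        if err == nil && strings.TrimSpace(string(out)) == expectedID {
            fmt.Println(image, "already present with the expected ID, nothing to do")
            return
        }

        // Wrong or missing image: remove whatever the runtime has and load the cached copy.
        exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
        if loadOut, err := exec.Command("sudo", "podman", "load", "-i", cachePath).CombinedOutput(); err != nil {
            fmt.Printf("load from cache failed (as the warning above reports): %v\n%s", err, loadOut)
        }
    }
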
	I0127 14:07:46.815228  601373 kubeadm.go:934] updating node { 192.168.39.11 8443 v1.20.0 crio true true} ...
	I0127 14:07:46.815358  601373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-456130 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:07:46.815423  601373 ssh_runner.go:195] Run: crio config
	I0127 14:07:46.874094  601373 cni.go:84] Creating CNI manager for ""
	I0127 14:07:46.874116  601373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:07:46.874125  601373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:07:46.874148  601373 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-456130 NodeName:old-k8s-version-456130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 14:07:46.874318  601373 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-456130"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:07:46.874398  601373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 14:07:46.884483  601373 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:07:46.884548  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:07:46.893923  601373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 14:07:46.910086  601373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:07:46.926183  601373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0127 14:07:46.942181  601373 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I0127 14:07:46.945997  601373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:07:46.957628  601373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:47.083251  601373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:07:47.099548  601373 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130 for IP: 192.168.39.11
	I0127 14:07:47.099571  601373 certs.go:194] generating shared ca certs ...
	I0127 14:07:47.099620  601373 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.099825  601373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:07:47.099872  601373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:07:47.099883  601373 certs.go:256] generating profile certs ...
	I0127 14:07:47.099941  601373 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.key
	I0127 14:07:47.099966  601373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.crt with IP's: []
	I0127 14:07:47.231224  601373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.crt ...
	I0127 14:07:47.231255  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.crt: {Name:mk2195be2553687d06225303e1e64a924b7177d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.231412  601373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.key ...
	I0127 14:07:47.231425  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.key: {Name:mk5eae8d9e14b45dbe6c6e0f3c3649d5f4445d5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.261333  601373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a
	I0127 14:07:47.261392  601373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.11]
	I0127 14:07:47.431351  601373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a ...
	I0127 14:07:47.431380  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a: {Name:mkcee3647454c013eeabdf2b71abfeb33a090099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.436354  601373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a ...
	I0127 14:07:47.436381  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a: {Name:mk7ade60d44a5e93338e4cd40c9a2ac34565f282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.436491  601373 certs.go:381] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt
	I0127 14:07:47.436583  601373 certs.go:385] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key
	I0127 14:07:47.436654  601373 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key
	I0127 14:07:47.436674  601373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt with IP's: []
	I0127 14:07:47.602017  601373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt ...
	I0127 14:07:47.602048  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt: {Name:mkdc8c889c4adb19570ac53e2a3880c16e79ab20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.602204  601373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key ...
	I0127 14:07:47.602217  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key: {Name:mk1e4f6a3159570dde8e09b032b2a9e14d0b7aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
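
The certs.go/crypto.go lines above generate a client cert, an apiserver cert whose SANs include the service IP, loopback, and the node IP, and an aggregator proxy-client cert, each signed by the shared CA. A self-contained crypto/x509 sketch of the apiserver-style cert; it creates an ephemeral CA in place of the existing minikubeCA key pair, and error handling is elided:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Ephemeral CA key/cert stands in for the existing minikubeCA pair.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert for the apiserver, signed for the service IP, loopback addresses and
        // node IP, mirroring the SAN list in the log above.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.11"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
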
	I0127 14:07:47.602383  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:07:47.602419  601373 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:07:47.602429  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:07:47.602450  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:07:47.602472  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:07:47.602492  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:07:47.602527  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:47.603059  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:07:47.629391  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:07:47.653289  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:07:47.676831  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:07:47.700476  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 14:07:47.730298  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:07:47.756643  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:07:47.780594  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 14:07:47.804880  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:07:47.827867  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:07:47.851041  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:07:47.875254  601373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:07:47.891763  601373 ssh_runner.go:195] Run: openssl version
	I0127 14:07:47.897606  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:07:47.908349  601373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:07:47.912980  601373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:07:47.913025  601373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:07:47.919121  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:07:47.929695  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:07:47.943397  601373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:07:47.948357  601373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:07:47.948408  601373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:07:47.954207  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:07:47.966803  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:07:47.979856  601373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:47.984586  601373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:47.984628  601373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:47.994192  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
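
The openssl x509 -hash / ln -fs pairs above install each CA PEM under its OpenSSL subject-hash name in /etc/ssl/certs so the system trust store can find it. A small sketch of one such link, assuming openssl is on PATH and root access for the symlink:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    // Installs a certificate into /etc/ssl/certs under its OpenSSL subject-hash name,
    // which is what the openssl/ln pairs in the log are doing.
    func main() {
        const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"

        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            log.Fatalf("hashing %s: %v", pemPath, err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"

        // -f so a stale link is replaced, -s for a symlink; needs root, like the log's sudo.
        if msg, err := exec.Command("sudo", "ln", "-fs", pemPath, link).CombinedOutput(); err != nil {
            log.Fatalf("linking %s: %v\n%s", link, err, msg)
        }
        log.Printf("linked %s -> %s", link, pemPath)
    }
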
	I0127 14:07:48.014136  601373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:07:48.021745  601373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 14:07:48.021812  601373 kubeadm.go:392] StartCluster: {Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:07:48.021934  601373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:07:48.021983  601373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:07:48.068726  601373 cri.go:89] found id: ""
	I0127 14:07:48.068811  601373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:07:48.079370  601373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:07:48.092372  601373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:07:48.105607  601373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:07:48.105625  601373 kubeadm.go:157] found existing configuration files:
	
	I0127 14:07:48.105664  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:07:48.118318  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:07:48.118379  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:07:48.128022  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:07:48.137623  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:07:48.137689  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:07:48.149172  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:07:48.161255  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:07:48.161297  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:07:48.176810  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:07:48.185831  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:07:48.185885  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
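
The grep/rm pairs above are the stale-config cleanup: any kubeconfig left in /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 (or does not exist at all) is removed before kubeadm init runs. A compact sketch of that loop:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // For each kubeconfig kubeadm may have left behind, keep it only if it already
    // points at the expected control-plane endpoint; otherwise remove it so that
    // `kubeadm init` starts from a clean slate.
    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                // Missing file or wrong endpoint: either way the config is stale.
                fmt.Println("removing stale config:", f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }
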
	I0127 14:07:48.195275  601373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:07:48.482045  601373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:09:45.518482  601373 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 14:09:45.518597  601373 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 14:09:45.520473  601373 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 14:09:45.520526  601373 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:09:45.520649  601373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:09:45.520791  601373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:09:45.520909  601373 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 14:09:45.520997  601373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:09:45.522588  601373 out.go:235]   - Generating certificates and keys ...
	I0127 14:09:45.522688  601373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:09:45.522771  601373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:09:45.522856  601373 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 14:09:45.522951  601373 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 14:09:45.523036  601373 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 14:09:45.523105  601373 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 14:09:45.523177  601373 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 14:09:45.523372  601373 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-456130] and IPs [192.168.39.11 127.0.0.1 ::1]
	I0127 14:09:45.523453  601373 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 14:09:45.523659  601373 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-456130] and IPs [192.168.39.11 127.0.0.1 ::1]
	I0127 14:09:45.523773  601373 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 14:09:45.523884  601373 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 14:09:45.523951  601373 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 14:09:45.524033  601373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:09:45.524121  601373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:09:45.524202  601373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:09:45.524302  601373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:09:45.524390  601373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:09:45.524510  601373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:09:45.524615  601373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:09:45.524676  601373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:09:45.524767  601373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:09:45.525979  601373 out.go:235]   - Booting up control plane ...
	I0127 14:09:45.526078  601373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:09:45.526177  601373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:09:45.526271  601373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:09:45.526373  601373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:09:45.526568  601373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 14:09:45.526623  601373 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 14:09:45.526716  601373 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:09:45.526951  601373 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:09:45.527054  601373 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:09:45.527325  601373 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:09:45.527444  601373 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:09:45.527637  601373 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:09:45.527696  601373 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:09:45.527864  601373 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:09:45.527966  601373 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:09:45.528174  601373 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:09:45.528190  601373 kubeadm.go:310] 
	I0127 14:09:45.528244  601373 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 14:09:45.528299  601373 kubeadm.go:310] 		timed out waiting for the condition
	I0127 14:09:45.528307  601373 kubeadm.go:310] 
	I0127 14:09:45.528363  601373 kubeadm.go:310] 	This error is likely caused by:
	I0127 14:09:45.528405  601373 kubeadm.go:310] 		- The kubelet is not running
	I0127 14:09:45.528488  601373 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 14:09:45.528494  601373 kubeadm.go:310] 
	I0127 14:09:45.528580  601373 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 14:09:45.528608  601373 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 14:09:45.528636  601373 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 14:09:45.528643  601373 kubeadm.go:310] 
	I0127 14:09:45.528733  601373 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 14:09:45.528802  601373 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 14:09:45.528808  601373 kubeadm.go:310] 
	I0127 14:09:45.528886  601373 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 14:09:45.528957  601373 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 14:09:45.529020  601373 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 14:09:45.529077  601373 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 14:09:45.529162  601373 kubeadm.go:310] 
	W0127 14:09:45.529212  601373 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-456130] and IPs [192.168.39.11 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-456130] and IPs [192.168.39.11 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-456130] and IPs [192.168.39.11 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-456130] and IPs [192.168.39.11 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 14:09:45.529251  601373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 14:09:46.005965  601373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:09:46.020432  601373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:09:46.030201  601373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:09:46.030228  601373 kubeadm.go:157] found existing configuration files:
	
	I0127 14:09:46.030284  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:09:46.039436  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:09:46.039500  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:09:46.048696  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:09:46.057538  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:09:46.057617  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:09:46.066931  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:09:46.076232  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:09:46.076275  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:09:46.085799  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:09:46.094774  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:09:46.094823  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:09:46.103843  601373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:09:46.172160  601373 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 14:09:46.172260  601373 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:09:46.313710  601373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:09:46.313844  601373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:09:46.313981  601373 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 14:09:46.522457  601373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:09:46.524018  601373 out.go:235]   - Generating certificates and keys ...
	I0127 14:09:46.524141  601373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:09:46.524228  601373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:09:46.524341  601373 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:09:46.524428  601373 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:09:46.524699  601373 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:09:46.525006  601373 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:09:46.525493  601373 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:09:46.525994  601373 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:09:46.526516  601373 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:09:46.527035  601373 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:09:46.527227  601373 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:09:46.527335  601373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:09:46.611876  601373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:09:46.736374  601373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:09:46.822373  601373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:09:47.045466  601373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:09:47.065839  601373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:09:47.066782  601373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:09:47.066869  601373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:09:47.195929  601373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:09:47.197401  601373 out.go:235]   - Booting up control plane ...
	I0127 14:09:47.197530  601373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:09:47.204918  601373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:09:47.205024  601373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:09:47.205865  601373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:09:47.207778  601373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 14:10:27.210483  601373 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 14:10:27.210693  601373 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:10:27.210946  601373 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:10:32.211360  601373 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:10:32.211614  601373 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:10:42.212371  601373 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:10:42.212681  601373 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:11:02.211592  601373 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:11:02.211810  601373 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:11:42.211019  601373 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:11:42.211257  601373 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:11:42.211650  601373 kubeadm.go:310] 
	I0127 14:11:42.211690  601373 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 14:11:42.211730  601373 kubeadm.go:310] 		timed out waiting for the condition
	I0127 14:11:42.211737  601373 kubeadm.go:310] 
	I0127 14:11:42.211772  601373 kubeadm.go:310] 	This error is likely caused by:
	I0127 14:11:42.211803  601373 kubeadm.go:310] 		- The kubelet is not running
	I0127 14:11:42.211896  601373 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 14:11:42.211903  601373 kubeadm.go:310] 
	I0127 14:11:42.212017  601373 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 14:11:42.212072  601373 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 14:11:42.212118  601373 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 14:11:42.212128  601373 kubeadm.go:310] 
	I0127 14:11:42.212287  601373 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 14:11:42.212403  601373 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 14:11:42.212417  601373 kubeadm.go:310] 
	I0127 14:11:42.212559  601373 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 14:11:42.212660  601373 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 14:11:42.212727  601373 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 14:11:42.212816  601373 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 14:11:42.212828  601373 kubeadm.go:310] 
	I0127 14:11:42.213738  601373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:11:42.213852  601373 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 14:11:42.213954  601373 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 14:11:42.214050  601373 kubeadm.go:394] duration metric: took 3m54.192244016s to StartCluster
	I0127 14:11:42.214105  601373 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:11:42.214175  601373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:11:42.266826  601373 cri.go:89] found id: ""
	I0127 14:11:42.266856  601373 logs.go:282] 0 containers: []
	W0127 14:11:42.266866  601373 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:11:42.266875  601373 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:11:42.266954  601373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:11:42.304300  601373 cri.go:89] found id: ""
	I0127 14:11:42.304327  601373 logs.go:282] 0 containers: []
	W0127 14:11:42.304334  601373 logs.go:284] No container was found matching "etcd"
	I0127 14:11:42.304340  601373 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:11:42.304401  601373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:11:42.348434  601373 cri.go:89] found id: ""
	I0127 14:11:42.348463  601373 logs.go:282] 0 containers: []
	W0127 14:11:42.348472  601373 logs.go:284] No container was found matching "coredns"
	I0127 14:11:42.348481  601373 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:11:42.348547  601373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:11:42.385818  601373 cri.go:89] found id: ""
	I0127 14:11:42.385848  601373 logs.go:282] 0 containers: []
	W0127 14:11:42.385858  601373 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:11:42.385867  601373 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:11:42.385938  601373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:11:42.430408  601373 cri.go:89] found id: ""
	I0127 14:11:42.430435  601373 logs.go:282] 0 containers: []
	W0127 14:11:42.430442  601373 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:11:42.430449  601373 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:11:42.430510  601373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:11:42.485359  601373 cri.go:89] found id: ""
	I0127 14:11:42.485390  601373 logs.go:282] 0 containers: []
	W0127 14:11:42.485397  601373 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:11:42.485403  601373 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:11:42.485455  601373 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:11:42.526076  601373 cri.go:89] found id: ""
	I0127 14:11:42.526103  601373 logs.go:282] 0 containers: []
	W0127 14:11:42.526110  601373 logs.go:284] No container was found matching "kindnet"
	I0127 14:11:42.526126  601373 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:11:42.526138  601373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:11:42.632836  601373 logs.go:123] Gathering logs for container status ...
	I0127 14:11:42.632865  601373 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:11:42.675550  601373 logs.go:123] Gathering logs for kubelet ...
	I0127 14:11:42.675578  601373 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:11:42.726326  601373 logs.go:123] Gathering logs for dmesg ...
	I0127 14:11:42.726359  601373 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:11:42.740009  601373 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:11:42.740033  601373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:11:42.868780  601373 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0127 14:11:42.868808  601373 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 14:11:42.868853  601373 out.go:270] * 
	* 
	W0127 14:11:42.868914  601373 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 14:11:42.868933  601373 out.go:270] * 
	* 
	W0127 14:11:42.870135  601373 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 14:11:42.873633  601373 out.go:201] 
	W0127 14:11:42.874962  601373 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 14:11:42.875016  601373 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 14:11:42.875044  601373 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 14:11:42.876554  601373 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-456130 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
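The failure above is minikube's K8S_KUBELET_NOT_RUNNING exit: the kubelet never answered http://localhost:10248/healthz, so kubeadm timed out waiting for the control plane. A minimal triage sketch, assuming the old-k8s-version-456130 VM is still reachable over `minikube ssh` and reusing only the commands the log above already names (the cgroup-driver retry is the suggestion minikube itself prints, not a confirmed fix; the start flags are trimmed from the failing invocation):

	# inspect kubelet state on the node (commands taken from the kubeadm advice above)
	out/minikube-linux-amd64 -p old-k8s-version-456130 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-456130 ssh "sudo journalctl -xeu kubelet -n 100 --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-456130 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	# retry the first start with the cgroup driver minikube suggests above
	out/minikube-linux-amd64 start -p old-k8s-version-456130 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd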
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 6 (290.206679ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 14:11:43.206416  604141 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-456130" does not appear in /home/jenkins/minikube-integration/20327-555419/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-456130" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (274.91s)
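The post-mortem status check above exits 6 because the kubeconfig no longer carries an endpoint for old-k8s-version-456130, not because the VM stopped; the stdout warning points at the remedy. A short sketch of that repair, assuming the profile from the run above (it only rewrites the kubeconfig entry and does not address the kubelet failure itself):

	# rewrite the kubeconfig entry for the profile, then re-check host state
	out/minikube-linux-amd64 -p old-k8s-version-456130 update-context
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130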

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (63.99s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-966446 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-966446 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.786765388s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-966446] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-966446" primary control-plane node in "pause-966446" cluster
	* Updating the running kvm2 "pause-966446" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-966446" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:07:19.435534  601531 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:07:19.435858  601531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:07:19.435870  601531 out.go:358] Setting ErrFile to fd 2...
	I0127 14:07:19.435878  601531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:07:19.436187  601531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:07:19.436903  601531 out.go:352] Setting JSON to false
	I0127 14:07:19.438204  601531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17384,"bootTime":1737969455,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:07:19.438370  601531 start.go:139] virtualization: kvm guest
	I0127 14:07:19.440403  601531 out.go:177] * [pause-966446] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:07:19.441752  601531 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:07:19.441752  601531 notify.go:220] Checking for updates...
	I0127 14:07:19.442966  601531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:07:19.444252  601531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:07:19.445369  601531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:07:19.446469  601531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:07:19.447722  601531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:07:19.449836  601531 config.go:182] Loaded profile config "pause-966446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:07:19.450454  601531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:07:19.450542  601531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:07:19.467247  601531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0127 14:07:19.467684  601531 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:07:19.468237  601531 main.go:141] libmachine: Using API Version  1
	I0127 14:07:19.468261  601531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:07:19.468630  601531 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:07:19.468844  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:19.469128  601531 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:07:19.469546  601531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:07:19.469612  601531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:07:19.485019  601531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I0127 14:07:19.485512  601531 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:07:19.485994  601531 main.go:141] libmachine: Using API Version  1
	I0127 14:07:19.486025  601531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:07:19.486480  601531 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:07:19.486692  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:19.525695  601531 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:07:19.526844  601531 start.go:297] selected driver: kvm2
	I0127 14:07:19.526865  601531 start.go:901] validating driver "kvm2" against &{Name:pause-966446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-966446 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-polic
y:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:07:19.527043  601531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:07:19.527499  601531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:19.527589  601531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:07:19.544173  601531 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:07:19.545204  601531 cni.go:84] Creating CNI manager for ""
	I0127 14:07:19.545307  601531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:07:19.545387  601531 start.go:340] cluster config:
	{Name:pause-966446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-966446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:f
alse storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:07:19.545574  601531 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:19.547942  601531 out.go:177] * Starting "pause-966446" primary control-plane node in "pause-966446" cluster
	I0127 14:07:19.549030  601531 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:07:19.549078  601531 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:07:19.549091  601531 cache.go:56] Caching tarball of preloaded images
	I0127 14:07:19.549182  601531 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:07:19.549195  601531 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:07:19.549348  601531 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/config.json ...
	I0127 14:07:19.549600  601531 start.go:360] acquireMachinesLock for pause-966446: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:07:39.879235  601531 start.go:364] duration metric: took 20.329566031s to acquireMachinesLock for "pause-966446"
	I0127 14:07:39.879307  601531 start.go:96] Skipping create...Using existing machine configuration
	I0127 14:07:39.879319  601531 fix.go:54] fixHost starting: 
	I0127 14:07:39.879721  601531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:07:39.879771  601531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:07:39.900317  601531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0127 14:07:39.900814  601531 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:07:39.901373  601531 main.go:141] libmachine: Using API Version  1
	I0127 14:07:39.901400  601531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:07:39.901836  601531 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:07:39.902067  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:39.902217  601531 main.go:141] libmachine: (pause-966446) Calling .GetState
	I0127 14:07:39.903700  601531 fix.go:112] recreateIfNeeded on pause-966446: state=Running err=<nil>
	W0127 14:07:39.903736  601531 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 14:07:39.905821  601531 out.go:177] * Updating the running kvm2 "pause-966446" VM ...
	I0127 14:07:39.906911  601531 machine.go:93] provisionDockerMachine start ...
	I0127 14:07:39.906949  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:39.907481  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:39.910325  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:39.910762  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:39.910797  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:39.910950  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:39.911119  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:39.911295  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:39.911446  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:39.911572  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:39.911826  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:39.911845  601531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:07:40.027037  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-966446
	
	I0127 14:07:40.027073  601531 main.go:141] libmachine: (pause-966446) Calling .GetMachineName
	I0127 14:07:40.027344  601531 buildroot.go:166] provisioning hostname "pause-966446"
	I0127 14:07:40.027375  601531 main.go:141] libmachine: (pause-966446) Calling .GetMachineName
	I0127 14:07:40.027550  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.030738  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.031193  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.031218  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.031433  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.031655  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.031841  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.031991  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.032158  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:40.032374  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:40.032387  601531 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-966446 && echo "pause-966446" | sudo tee /etc/hostname
	I0127 14:07:40.166642  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-966446
	
	I0127 14:07:40.166671  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.170024  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.170512  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.170565  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.170778  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.170976  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.171116  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.171271  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.171432  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:40.171606  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:40.171624  601531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-966446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-966446/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-966446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:07:40.292064  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:07:40.292093  601531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:07:40.292114  601531 buildroot.go:174] setting up certificates
	I0127 14:07:40.292125  601531 provision.go:84] configureAuth start
	I0127 14:07:40.292139  601531 main.go:141] libmachine: (pause-966446) Calling .GetMachineName
	I0127 14:07:40.292445  601531 main.go:141] libmachine: (pause-966446) Calling .GetIP
	I0127 14:07:40.295453  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.295895  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.295941  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.296050  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.298488  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.298935  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.298963  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.299181  601531 provision.go:143] copyHostCerts
	I0127 14:07:40.299250  601531 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:07:40.299282  601531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:07:40.299362  601531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:07:40.299525  601531 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:07:40.299542  601531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:07:40.299583  601531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:07:40.299703  601531 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:07:40.299718  601531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:07:40.299754  601531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:07:40.299869  601531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.pause-966446 san=[127.0.0.1 192.168.61.72 localhost minikube pause-966446]
	I0127 14:07:40.473785  601531 provision.go:177] copyRemoteCerts
	I0127 14:07:40.473854  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:07:40.473891  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.476480  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.476874  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.476904  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.477238  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.477436  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.477660  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.477835  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:40.577510  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:07:40.605346  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0127 14:07:40.635897  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 14:07:40.662885  601531 provision.go:87] duration metric: took 370.74521ms to configureAuth
	I0127 14:07:40.662909  601531 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:07:40.663150  601531 config.go:182] Loaded profile config "pause-966446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:07:40.663247  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.666176  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.666572  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.666607  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.666906  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.667096  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.667280  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.667426  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.667580  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:40.667771  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:40.667787  601531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:07:48.140390  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:07:48.140421  601531 machine.go:96] duration metric: took 8.233480321s to provisionDockerMachine
	I0127 14:07:48.140437  601531 start.go:293] postStartSetup for "pause-966446" (driver="kvm2")
	I0127 14:07:48.140450  601531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:07:48.140499  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.140860  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:07:48.140908  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.143436  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.143789  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.143817  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.143998  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.144214  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.144403  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.144558  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:48.236307  601531 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:07:48.241621  601531 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:07:48.241645  601531 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:07:48.241695  601531 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:07:48.241772  601531 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:07:48.241852  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:07:48.253943  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:48.280295  601531 start.go:296] duration metric: took 139.846011ms for postStartSetup
	I0127 14:07:48.280327  601531 fix.go:56] duration metric: took 8.401009659s for fixHost
	I0127 14:07:48.280349  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.283269  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.283690  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.283721  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.283910  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.284109  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.284271  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.284416  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.284557  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:48.284780  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:48.284791  601531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:07:48.398447  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986868.355909952
	
	I0127 14:07:48.398480  601531 fix.go:216] guest clock: 1737986868.355909952
	I0127 14:07:48.398491  601531 fix.go:229] Guest: 2025-01-27 14:07:48.355909952 +0000 UTC Remote: 2025-01-27 14:07:48.28033142 +0000 UTC m=+28.896632167 (delta=75.578532ms)
	I0127 14:07:48.398520  601531 fix.go:200] guest clock delta is within tolerance: 75.578532ms
	I0127 14:07:48.398527  601531 start.go:83] releasing machines lock for "pause-966446", held for 8.519261631s
	I0127 14:07:48.398569  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.398896  601531 main.go:141] libmachine: (pause-966446) Calling .GetIP
	I0127 14:07:48.402171  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.402618  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.402667  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.402940  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.403483  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.403689  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.403796  601531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:07:48.403843  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.403901  601531 ssh_runner.go:195] Run: cat /version.json
	I0127 14:07:48.403928  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.406939  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.407341  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.407407  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.407482  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.407667  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.407898  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.407938  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.407970  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.408115  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.408274  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.408278  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.408459  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.408473  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:48.408626  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:48.524775  601531 ssh_runner.go:195] Run: systemctl --version
	I0127 14:07:48.532020  601531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:07:48.694345  601531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:07:48.704000  601531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:07:48.704077  601531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:07:48.719041  601531 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 14:07:48.719064  601531 start.go:495] detecting cgroup driver to use...
	I0127 14:07:48.719143  601531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:07:48.742423  601531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:07:48.761918  601531 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:07:48.761979  601531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:07:48.777294  601531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:07:48.792034  601531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:07:48.954341  601531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:07:49.087497  601531 docker.go:233] disabling docker service ...
	I0127 14:07:49.087581  601531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:07:49.105330  601531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:07:49.119089  601531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:07:49.297287  601531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:07:49.633836  601531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:07:49.794413  601531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:07:49.967991  601531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:07:49.968073  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.084660  601531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:07:50.084743  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.131683  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.199059  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.266955  601531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:07:50.289803  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.308029  601531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.346036  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.378069  601531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:07:50.396951  601531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:07:50.417736  601531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:50.685609  601531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:07:51.151301  601531 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:07:51.151405  601531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:07:51.156366  601531 start.go:563] Will wait 60s for crictl version
	I0127 14:07:51.156427  601531 ssh_runner.go:195] Run: which crictl
	I0127 14:07:51.160621  601531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:07:51.202254  601531 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:07:51.202354  601531 ssh_runner.go:195] Run: crio --version
	I0127 14:07:51.232941  601531 ssh_runner.go:195] Run: crio --version
	I0127 14:07:51.309473  601531 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:07:51.310590  601531 main.go:141] libmachine: (pause-966446) Calling .GetIP
	I0127 14:07:51.314164  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:51.314691  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:51.314723  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:51.315015  601531 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 14:07:51.342772  601531 kubeadm.go:883] updating cluster {Name:pause-966446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-966446 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portain
er:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:07:51.342916  601531 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:07:51.342980  601531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:51.540139  601531 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:07:51.540176  601531 crio.go:433] Images already preloaded, skipping extraction
	I0127 14:07:51.540245  601531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:51.723140  601531 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:07:51.723176  601531 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:07:51.723208  601531 kubeadm.go:934] updating node { 192.168.61.72 8443 v1.32.1 crio true true} ...
	I0127 14:07:51.723370  601531 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-966446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-966446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:07:51.723451  601531 ssh_runner.go:195] Run: crio config
	I0127 14:07:51.833408  601531 cni.go:84] Creating CNI manager for ""
	I0127 14:07:51.833430  601531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:07:51.833440  601531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:07:51.833472  601531 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.72 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-966446 NodeName:pause-966446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:07:51.833663  601531 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-966446"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.72"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.72"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:07:51.833759  601531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:07:51.849643  601531 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:07:51.849734  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:07:51.859622  601531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0127 14:07:51.878933  601531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:07:51.927875  601531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0127 14:07:51.946086  601531 ssh_runner.go:195] Run: grep 192.168.61.72	control-plane.minikube.internal$ /etc/hosts
	I0127 14:07:51.954227  601531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:52.111702  601531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:07:52.136062  601531 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446 for IP: 192.168.61.72
	I0127 14:07:52.136089  601531 certs.go:194] generating shared ca certs ...
	I0127 14:07:52.136111  601531 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:52.136278  601531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:07:52.136342  601531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:07:52.136354  601531 certs.go:256] generating profile certs ...
	I0127 14:07:52.136983  601531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/client.key
	I0127 14:07:52.137115  601531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/apiserver.key.f1093c80
	I0127 14:07:52.137177  601531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/proxy-client.key
	I0127 14:07:52.137354  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:07:52.137393  601531 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:07:52.137408  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:07:52.137445  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:07:52.137487  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:07:52.137518  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:07:52.137570  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:52.139063  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:07:52.163001  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:07:52.186084  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:07:52.208609  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:07:52.231068  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 14:07:52.255538  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:07:52.279172  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:07:52.304122  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:07:52.327911  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:07:52.350495  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:07:52.374464  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:07:52.413824  601531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:07:52.429570  601531 ssh_runner.go:195] Run: openssl version
	I0127 14:07:52.435317  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:07:52.446068  601531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:07:52.450652  601531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:07:52.450700  601531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:07:52.456430  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:07:52.466347  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:07:52.478172  601531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:52.483080  601531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:52.483133  601531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:52.488944  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:07:52.498827  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:07:52.510270  601531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:07:52.514978  601531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:07:52.515019  601531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:07:52.520460  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:07:52.529770  601531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:07:52.534208  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:07:52.539664  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:07:52.545155  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:07:52.550674  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:07:52.556058  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:07:52.561391  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 14:07:52.566877  601531 kubeadm.go:392] StartCluster: {Name:pause-966446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-966446 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:
false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:07:52.566970  601531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:07:52.567004  601531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:07:52.602509  601531 cri.go:89] found id: "9a4a6873a790179033815b842a490593ca7e247ab4c35927ab123d40b5b1c1b0"
	I0127 14:07:52.602533  601531 cri.go:89] found id: "153475c34d724a00aae02973ec25d6ba069b6798d663e0fb03fdcb678fbf90dc"
	I0127 14:07:52.602539  601531 cri.go:89] found id: "538bc3dc9efa53fa541ba54500003bc5a9f4ecc98ce84f4299f09c6519df409f"
	I0127 14:07:52.602544  601531 cri.go:89] found id: "ddaac33d82a8a7fca412c3f5cce780ba01829a09277d596b2eb83c688aa40627"
	I0127 14:07:52.602548  601531 cri.go:89] found id: "2fff1ca9ed0fb4d432dbddcbfba74d463e908d8e323e8f7da8389d0e159e27eb"
	I0127 14:07:52.602552  601531 cri.go:89] found id: "67099ee481deaf66bccd062bf3bbfde8a62b7a39d5819e92b57acf9ddbb3d637"
	I0127 14:07:52.602556  601531 cri.go:89] found id: ""
	I0127 14:07:52.602593  601531 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-966446 -n pause-966446
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-966446 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-966446 logs -n 25: (2.255159204s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-418372 sudo cat                            | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo cat                            | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo cat                            | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo cat                            | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo find                           | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo crio                           | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p cilium-418372                                     | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| start   | -p stopped-upgrade-736772                            | minikube               | jenkins | v1.26.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:06 UTC |
	|         | --memory=2200 --vm-driver=kvm2                       |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                        |         |         |                     |                     |
	| ssh     | -p NoKubernetes-412983 sudo                          | NoKubernetes-412983    | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | systemctl is-active --quiet                          |                        |         |         |                     |                     |
	|         | service kubelet                                      |                        |         |         |                     |                     |
	| delete  | -p NoKubernetes-412983                               | NoKubernetes-412983    | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| start   | -p pause-966446 --memory=2048                        | pause-966446           | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:07 UTC |
	|         | --install-addons=false                               |                        |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| stop    | stopped-upgrade-736772 stop                          | minikube               | jenkins | v1.26.0 | 27 Jan 25 14:06 UTC | 27 Jan 25 14:06 UTC |
	| start   | -p stopped-upgrade-736772                            | stopped-upgrade-736772 | jenkins | v1.35.0 | 27 Jan 25 14:06 UTC | 27 Jan 25 14:07 UTC |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| delete  | -p stopped-upgrade-736772                            | stopped-upgrade-736772 | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC | 27 Jan 25 14:07 UTC |
	| start   | -p cert-expiration-335486                            | cert-expiration-335486 | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC | 27 Jan 25 14:07 UTC |
	|         | --memory=2048                                        |                        |         |         |                     |                     |
	|         | --cert-expiration=8760h                              |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-456130                            | old-k8s-version-456130 | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                        |         |         |                     |                     |
	| start   | -p pause-966446                                      | pause-966446           | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC | 27 Jan 25 14:08 UTC |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| delete  | -p cert-expiration-335486                            | cert-expiration-335486 | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC | 27 Jan 25 14:07 UTC |
	| start   | -p no-preload-183205                                 | no-preload-183205      | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                         |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:07:39
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:07:39.995138  601809 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:07:39.995253  601809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:07:39.995265  601809 out.go:358] Setting ErrFile to fd 2...
	I0127 14:07:39.995271  601809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:07:39.995477  601809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:07:39.996057  601809 out.go:352] Setting JSON to false
	I0127 14:07:39.997072  601809 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17405,"bootTime":1737969455,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:07:39.997182  601809 start.go:139] virtualization: kvm guest
	I0127 14:07:39.998902  601809 out.go:177] * [no-preload-183205] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:07:40.000550  601809 notify.go:220] Checking for updates...
	I0127 14:07:40.000559  601809 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:07:40.001745  601809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:07:40.002922  601809 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:07:40.004219  601809 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:07:40.005491  601809 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:07:40.006808  601809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:07:40.008634  601809 config.go:182] Loaded profile config "kubernetes-upgrade-225004": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:07:40.008824  601809 config.go:182] Loaded profile config "old-k8s-version-456130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:07:40.009034  601809 config.go:182] Loaded profile config "pause-966446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:07:40.009161  601809 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:07:40.050865  601809 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:07:40.052008  601809 start.go:297] selected driver: kvm2
	I0127 14:07:40.052029  601809 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:07:40.052044  601809 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:07:40.053050  601809 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.053145  601809 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:07:40.069538  601809 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:07:40.069633  601809 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:07:40.069954  601809 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:07:40.070033  601809 cni.go:84] Creating CNI manager for ""
	I0127 14:07:40.070116  601809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:07:40.070128  601809 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:07:40.070206  601809 start.go:340] cluster config:
	{Name:no-preload-183205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-183205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:07:40.070401  601809 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.071793  601809 out.go:177] * Starting "no-preload-183205" primary control-plane node in "no-preload-183205" cluster
	I0127 14:07:38.454864  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-456130
	
	I0127 14:07:38.454932  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.457742  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.458173  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.458207  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.458350  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.458587  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.458762  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.458927  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.459102  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:38.459311  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:38.459349  601373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-456130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-456130/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-456130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:07:38.585645  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:07:38.585683  601373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:07:38.585741  601373 buildroot.go:174] setting up certificates
	I0127 14:07:38.585755  601373 provision.go:84] configureAuth start
	I0127 14:07:38.585772  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetMachineName
	I0127 14:07:38.586102  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:38.589345  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.589793  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.589823  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.589991  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.592421  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.592828  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.592860  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.593007  601373 provision.go:143] copyHostCerts
	I0127 14:07:38.593064  601373 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:07:38.593091  601373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:07:38.593170  601373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:07:38.593347  601373 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:07:38.593362  601373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:07:38.593392  601373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:07:38.593472  601373 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:07:38.593481  601373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:07:38.593503  601373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:07:38.593570  601373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-456130 san=[127.0.0.1 192.168.39.11 localhost minikube old-k8s-version-456130]
	I0127 14:07:38.768898  601373 provision.go:177] copyRemoteCerts
	I0127 14:07:38.768964  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:07:38.768999  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.771730  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.772083  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.772124  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.772282  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.772477  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.772635  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.772784  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:38.859870  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:07:38.885052  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 14:07:38.911635  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:07:38.935458  601373 provision.go:87] duration metric: took 349.687848ms to configureAuth
	I0127 14:07:38.935490  601373 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:07:38.935724  601373 config.go:182] Loaded profile config "old-k8s-version-456130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:07:38.935827  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.939100  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.939413  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.939445  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.939604  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.939827  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.940036  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.940197  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.940380  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:38.940629  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:38.940652  601373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:07:39.198836  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:07:39.198866  601373 main.go:141] libmachine: Checking connection to Docker...
	I0127 14:07:39.198874  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetURL
	I0127 14:07:39.200067  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | using libvirt version 6000000
	I0127 14:07:39.203833  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.204766  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.204793  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.205007  601373 main.go:141] libmachine: Docker is up and running!
	I0127 14:07:39.205024  601373 main.go:141] libmachine: Reticulating splines...
	I0127 14:07:39.205031  601373 client.go:171] duration metric: took 24.938263372s to LocalClient.Create
	I0127 14:07:39.205058  601373 start.go:167] duration metric: took 24.938330128s to libmachine.API.Create "old-k8s-version-456130"
	I0127 14:07:39.205072  601373 start.go:293] postStartSetup for "old-k8s-version-456130" (driver="kvm2")
	I0127 14:07:39.205093  601373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:07:39.205118  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.205374  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:07:39.205407  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.210121  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.212293  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.212324  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.212592  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.212757  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.212942  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.213088  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:39.300676  601373 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:07:39.305063  601373 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:07:39.305089  601373 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:07:39.305171  601373 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:07:39.305268  601373 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:07:39.305392  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:07:39.316817  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:39.342960  601373 start.go:296] duration metric: took 137.875244ms for postStartSetup
	I0127 14:07:39.343015  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetConfigRaw
	I0127 14:07:39.343611  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:39.753533  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.753907  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.753930  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.754271  601373 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/config.json ...
	I0127 14:07:39.754483  601373 start.go:128] duration metric: took 25.508299796s to createHost
	I0127 14:07:39.754518  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.756915  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.757237  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.757272  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.757400  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.757611  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.757779  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.757926  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.758089  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:39.758248  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:39.758258  601373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:07:39.879057  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986859.855229643
	
	I0127 14:07:39.879079  601373 fix.go:216] guest clock: 1737986859.855229643
	I0127 14:07:39.879088  601373 fix.go:229] Guest: 2025-01-27 14:07:39.855229643 +0000 UTC Remote: 2025-01-27 14:07:39.75450005 +0000 UTC m=+31.428265457 (delta=100.729593ms)
	I0127 14:07:39.879122  601373 fix.go:200] guest clock delta is within tolerance: 100.729593ms
	I0127 14:07:39.879129  601373 start.go:83] releasing machines lock for "old-k8s-version-456130", held for 25.633123341s
	I0127 14:07:39.879156  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.879419  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:39.882266  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.882753  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.882778  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.882967  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.883551  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.883743  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.883842  601373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:07:39.883882  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.884110  601373 ssh_runner.go:195] Run: cat /version.json
	I0127 14:07:39.884136  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.886654  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887060  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.887121  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887145  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887321  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.887480  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.887648  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.887663  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.887669  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887828  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:39.887853  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.888019  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.888172  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.888306  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:39.974593  601373 ssh_runner.go:195] Run: systemctl --version
	I0127 14:07:39.998185  601373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:07:40.159948  601373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:07:40.166159  601373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:07:40.166229  601373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:07:40.185635  601373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:07:40.185657  601373 start.go:495] detecting cgroup driver to use...
	I0127 14:07:40.185727  601373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:07:40.204886  601373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:07:40.218758  601373 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:07:40.218813  601373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:07:40.234338  601373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:07:40.249194  601373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:07:40.405723  601373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:07:40.561717  601373 docker.go:233] disabling docker service ...
	I0127 14:07:40.561787  601373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:07:40.577711  601373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:07:40.593087  601373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:07:40.765539  601373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:07:40.900954  601373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:07:40.915793  601373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:07:40.935250  601373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 14:07:40.935316  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.945849  601373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:07:40.945907  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.955796  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.965535  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.975655  601373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:07:40.985983  601373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:07:40.995087  601373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:07:40.995142  601373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:07:41.007442  601373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:07:41.018580  601373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:41.150827  601373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:07:41.235346  601373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:07:41.235426  601373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:07:41.239989  601373 start.go:563] Will wait 60s for crictl version
	I0127 14:07:41.240037  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:41.243750  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:07:41.280633  601373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:07:41.280709  601373 ssh_runner.go:195] Run: crio --version
	I0127 14:07:41.312743  601373 ssh_runner.go:195] Run: crio --version
	I0127 14:07:41.342444  601373 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 14:07:41.343595  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:41.346163  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:41.346587  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:41.346619  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:41.346796  601373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 14:07:41.351141  601373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:07:41.363722  601373 kubeadm.go:883] updating cluster {Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:07:41.363830  601373 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 14:07:41.363893  601373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:41.394760  601373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 14:07:41.394820  601373 ssh_runner.go:195] Run: which lz4
	I0127 14:07:41.398404  601373 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:07:41.402316  601373 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:07:41.402348  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 14:07:43.199494  601373 crio.go:462] duration metric: took 1.801104328s to copy over tarball
	I0127 14:07:43.199572  601373 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:07:39.906911  601531 machine.go:93] provisionDockerMachine start ...
	I0127 14:07:39.906949  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:39.907481  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:39.910325  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:39.910762  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:39.910797  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:39.910950  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:39.911119  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:39.911295  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:39.911446  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:39.911572  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:39.911826  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:39.911845  601531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:07:40.027037  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-966446
	
	I0127 14:07:40.027073  601531 main.go:141] libmachine: (pause-966446) Calling .GetMachineName
	I0127 14:07:40.027344  601531 buildroot.go:166] provisioning hostname "pause-966446"
	I0127 14:07:40.027375  601531 main.go:141] libmachine: (pause-966446) Calling .GetMachineName
	I0127 14:07:40.027550  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.030738  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.031193  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.031218  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.031433  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.031655  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.031841  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.031991  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.032158  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:40.032374  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:40.032387  601531 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-966446 && echo "pause-966446" | sudo tee /etc/hostname
	I0127 14:07:40.166642  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-966446
	
	I0127 14:07:40.166671  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.170024  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.170512  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.170565  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.170778  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.170976  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.171116  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.171271  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.171432  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:40.171606  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:40.171624  601531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-966446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-966446/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-966446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:07:40.292064  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:07:40.292093  601531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:07:40.292114  601531 buildroot.go:174] setting up certificates
	I0127 14:07:40.292125  601531 provision.go:84] configureAuth start
	I0127 14:07:40.292139  601531 main.go:141] libmachine: (pause-966446) Calling .GetMachineName
	I0127 14:07:40.292445  601531 main.go:141] libmachine: (pause-966446) Calling .GetIP
	I0127 14:07:40.295453  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.295895  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.295941  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.296050  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.298488  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.298935  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.298963  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.299181  601531 provision.go:143] copyHostCerts
	I0127 14:07:40.299250  601531 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:07:40.299282  601531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:07:40.299362  601531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:07:40.299525  601531 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:07:40.299542  601531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:07:40.299583  601531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:07:40.299703  601531 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:07:40.299718  601531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:07:40.299754  601531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:07:40.299869  601531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.pause-966446 san=[127.0.0.1 192.168.61.72 localhost minikube pause-966446]
	I0127 14:07:40.473785  601531 provision.go:177] copyRemoteCerts
	I0127 14:07:40.473854  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:07:40.473891  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.476480  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.476874  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.476904  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.477238  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.477436  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.477660  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.477835  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:40.577510  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:07:40.605346  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0127 14:07:40.635897  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 14:07:40.662885  601531 provision.go:87] duration metric: took 370.74521ms to configureAuth
	I0127 14:07:40.662909  601531 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:07:40.663150  601531 config.go:182] Loaded profile config "pause-966446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:07:40.663247  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.666176  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.666572  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.666607  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.666906  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.667096  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.667280  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.667426  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.667580  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:40.667771  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:40.667787  601531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:07:40.072821  601809 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:07:40.073002  601809 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/config.json ...
	I0127 14:07:40.073044  601809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/config.json: {Name:mka9c8ee9958e3f7ec7463281626fe1e3efb5598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:40.073113  601809 cache.go:107] acquiring lock: {Name:mk66b4f28a03faaae643efe520674fad2917cdda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073136  601809 cache.go:107] acquiring lock: {Name:mk6fbc282aded7ec6720a3c60ca5a3553bfd9648 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073133  601809 cache.go:107] acquiring lock: {Name:mk36c363b77b19af873b7dba68e6372e01e796ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073221  601809 start.go:360] acquireMachinesLock for no-preload-183205: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:07:40.073270  601809 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 14:07:40.073158  601809 cache.go:107] acquiring lock: {Name:mk5c6e88180d8da47162934c7e3e1802d2b17603 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073299  601809 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 14:07:40.073281  601809 cache.go:115] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 14:07:40.073343  601809 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 223.584µs
	I0127 14:07:40.073354  601809 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 14:07:40.073365  601809 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 14:07:40.073481  601809 cache.go:107] acquiring lock: {Name:mk43dc5afe3fb66354ecfbaac283409e7be87f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073593  601809 cache.go:107] acquiring lock: {Name:mk8e22d7888ff554b79f22bad43b84267c64f3cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073655  601809 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 14:07:40.073639  601809 cache.go:107] acquiring lock: {Name:mkb3dbf54b3c350f3252e35e2756d0e31b75ee20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073716  601809 cache.go:107] acquiring lock: {Name:mk27af2a77b4a1751a1c6ee4547349937489ce95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073760  601809 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0127 14:07:40.073805  601809 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0127 14:07:40.073894  601809 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 14:07:40.074857  601809 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0127 14:07:40.074870  601809 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 14:07:40.074866  601809 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 14:07:40.074894  601809 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 14:07:40.074899  601809 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 14:07:40.074882  601809 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 14:07:40.074968  601809 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0127 14:07:40.245790  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1
	I0127 14:07:40.248406  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0127 14:07:40.251949  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1
	I0127 14:07:40.252149  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1
	I0127 14:07:40.259010  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0127 14:07:40.268652  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	I0127 14:07:40.275859  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0127 14:07:40.354223  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 14:07:40.354255  601809 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 280.685672ms
	I0127 14:07:40.354270  601809 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 14:07:40.775887  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 14:07:40.775913  601809 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 702.785131ms
	I0127 14:07:40.775927  601809 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 14:07:41.770463  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 14:07:41.770497  601809 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 1.696827747s
	I0127 14:07:41.770513  601809 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 14:07:41.809456  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 14:07:41.809491  601809 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 1.736012385s
	I0127 14:07:41.809507  601809 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 14:07:41.900279  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 14:07:41.900316  601809 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 1.827215848s
	I0127 14:07:41.900332  601809 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 14:07:41.918472  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 14:07:41.918505  601809 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 1.845351664s
	I0127 14:07:41.918520  601809 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 14:07:42.242561  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 14:07:42.242597  601809 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 2.169067471s
	I0127 14:07:42.242613  601809 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 14:07:42.242636  601809 cache.go:87] Successfully saved all images to host disk.
	I0127 14:07:45.681236  601373 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.481625055s)
	I0127 14:07:45.681272  601373 crio.go:469] duration metric: took 2.481746403s to extract the tarball
	I0127 14:07:45.681283  601373 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 14:07:45.723404  601373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:45.766291  601373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 14:07:45.766315  601373 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 14:07:45.766388  601373 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:07:45.766433  601373 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:45.766461  601373 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:45.766492  601373 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:45.766533  601373 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 14:07:45.766531  601373 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 14:07:45.766468  601373 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:45.766411  601373 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:45.767945  601373 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:45.767945  601373 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:45.767990  601373 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 14:07:45.767960  601373 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:07:45.768071  601373 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 14:07:45.767958  601373 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:45.767963  601373 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:45.767961  601373 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:45.921712  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:45.928344  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:45.928604  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:45.933554  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:45.933934  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:45.938018  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 14:07:45.983763  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 14:07:46.033999  601373 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 14:07:46.034053  601373 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.034110  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.068009  601373 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 14:07:46.068059  601373 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.068054  601373 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 14:07:46.068098  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.068110  601373 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.068153  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.104861  601373 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 14:07:46.104892  601373 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 14:07:46.104913  601373 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.104924  601373 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.104953  601373 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 14:07:46.104964  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.104980  601373 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.105007  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.104962  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.110692  601373 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 14:07:46.110724  601373 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 14:07:46.110749  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.110774  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.110698  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.110853  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.118171  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.118213  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.118271  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.133826  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:07:46.248375  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.248407  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.259675  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.259775  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.285086  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.285190  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.297983  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:07:46.375150  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.406972  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.407097  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.407118  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.429281  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.441476  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:07:46.441554  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.519114  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 14:07:46.566524  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 14:07:46.566546  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 14:07:46.566643  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 14:07:46.584390  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 14:07:46.585274  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 14:07:46.585389  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 14:07:46.674534  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:07:46.815091  601373 cache_images.go:92] duration metric: took 1.048759178s to LoadCachedImages
	W0127 14:07:46.815206  601373 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0127 14:07:46.815228  601373 kubeadm.go:934] updating node { 192.168.39.11 8443 v1.20.0 crio true true} ...
	I0127 14:07:46.815358  601373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-456130 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:07:46.815423  601373 ssh_runner.go:195] Run: crio config
	I0127 14:07:46.874094  601373 cni.go:84] Creating CNI manager for ""
	I0127 14:07:46.874116  601373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:07:46.874125  601373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:07:46.874148  601373 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-456130 NodeName:old-k8s-version-456130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 14:07:46.874318  601373 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-456130"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:07:46.874398  601373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 14:07:46.884483  601373 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:07:46.884548  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:07:46.893923  601373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 14:07:46.910086  601373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:07:46.926183  601373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0127 14:07:46.942181  601373 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I0127 14:07:46.945997  601373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:07:46.957628  601373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:47.083251  601373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:07:47.099548  601373 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130 for IP: 192.168.39.11
	I0127 14:07:47.099571  601373 certs.go:194] generating shared ca certs ...
	I0127 14:07:47.099620  601373 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.099825  601373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:07:47.099872  601373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:07:47.099883  601373 certs.go:256] generating profile certs ...
	I0127 14:07:47.099941  601373 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.key
	I0127 14:07:47.099966  601373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.crt with IP's: []
	I0127 14:07:47.231224  601373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.crt ...
	I0127 14:07:47.231255  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.crt: {Name:mk2195be2553687d06225303e1e64a924b7177d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.231412  601373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.key ...
	I0127 14:07:47.231425  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.key: {Name:mk5eae8d9e14b45dbe6c6e0f3c3649d5f4445d5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.261333  601373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a
	I0127 14:07:47.261392  601373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.11]
	I0127 14:07:47.431351  601373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a ...
	I0127 14:07:47.431380  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a: {Name:mkcee3647454c013eeabdf2b71abfeb33a090099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.436354  601373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a ...
	I0127 14:07:47.436381  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a: {Name:mk7ade60d44a5e93338e4cd40c9a2ac34565f282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.436491  601373 certs.go:381] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt
	I0127 14:07:47.436583  601373 certs.go:385] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key
	I0127 14:07:47.436654  601373 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key
	I0127 14:07:47.436674  601373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt with IP's: []
	I0127 14:07:47.602017  601373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt ...
	I0127 14:07:47.602048  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt: {Name:mkdc8c889c4adb19570ac53e2a3880c16e79ab20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.602204  601373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key ...
	I0127 14:07:47.602217  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key: {Name:mk1e4f6a3159570dde8e09b032b2a9e14d0b7aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.602383  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:07:47.602419  601373 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:07:47.602429  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:07:47.602450  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:07:47.602472  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:07:47.602492  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:07:47.602527  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:47.603059  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:07:47.629391  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:07:47.653289  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:07:47.676831  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:07:47.700476  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 14:07:47.730298  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:07:47.756643  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:07:47.780594  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 14:07:47.804880  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:07:47.827867  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:07:47.851041  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:07:47.875254  601373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:07:47.891763  601373 ssh_runner.go:195] Run: openssl version
	I0127 14:07:47.897606  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:07:47.908349  601373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:07:47.912980  601373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:07:47.913025  601373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:07:47.919121  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:07:47.929695  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:07:47.943397  601373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:07:47.948357  601373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:07:47.948408  601373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:07:47.954207  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:07:47.966803  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:07:47.979856  601373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:47.984586  601373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:47.984628  601373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:47.994192  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:07:48.014136  601373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:07:48.021745  601373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 14:07:48.021812  601373 kubeadm.go:392] StartCluster: {Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:07:48.021934  601373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:07:48.021983  601373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:07:48.068726  601373 cri.go:89] found id: ""
	I0127 14:07:48.068811  601373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:07:48.079370  601373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:07:48.092372  601373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:07:48.105607  601373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:07:48.105625  601373 kubeadm.go:157] found existing configuration files:
	
	I0127 14:07:48.105664  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:07:48.118318  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:07:48.118379  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:07:48.128022  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:07:48.137623  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:07:48.137689  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:07:48.149172  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:07:48.161255  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:07:48.161297  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:07:48.176810  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:07:48.185831  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:07:48.185885  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:07:48.195275  601373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:07:48.398667  601809 start.go:364] duration metric: took 8.325386996s to acquireMachinesLock for "no-preload-183205"
	I0127 14:07:48.398731  601809 start.go:93] Provisioning new machine with config: &{Name:no-preload-183205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-183205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:07:48.398906  601809 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 14:07:48.140390  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:07:48.140421  601531 machine.go:96] duration metric: took 8.233480321s to provisionDockerMachine
	I0127 14:07:48.140437  601531 start.go:293] postStartSetup for "pause-966446" (driver="kvm2")
	I0127 14:07:48.140450  601531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:07:48.140499  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.140860  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:07:48.140908  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.143436  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.143789  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.143817  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.143998  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.144214  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.144403  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.144558  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:48.236307  601531 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:07:48.241621  601531 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:07:48.241645  601531 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:07:48.241695  601531 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:07:48.241772  601531 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:07:48.241852  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:07:48.253943  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:48.280295  601531 start.go:296] duration metric: took 139.846011ms for postStartSetup
	I0127 14:07:48.280327  601531 fix.go:56] duration metric: took 8.401009659s for fixHost
	I0127 14:07:48.280349  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.283269  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.283690  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.283721  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.283910  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.284109  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.284271  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.284416  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.284557  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:48.284780  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:48.284791  601531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:07:48.398447  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986868.355909952
	
	I0127 14:07:48.398480  601531 fix.go:216] guest clock: 1737986868.355909952
	I0127 14:07:48.398491  601531 fix.go:229] Guest: 2025-01-27 14:07:48.355909952 +0000 UTC Remote: 2025-01-27 14:07:48.28033142 +0000 UTC m=+28.896632167 (delta=75.578532ms)
	I0127 14:07:48.398520  601531 fix.go:200] guest clock delta is within tolerance: 75.578532ms
	I0127 14:07:48.398527  601531 start.go:83] releasing machines lock for "pause-966446", held for 8.519261631s
	I0127 14:07:48.398569  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.398896  601531 main.go:141] libmachine: (pause-966446) Calling .GetIP
	I0127 14:07:48.402171  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.402618  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.402667  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.402940  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.403483  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.403689  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.403796  601531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:07:48.403843  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.403901  601531 ssh_runner.go:195] Run: cat /version.json
	I0127 14:07:48.403928  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.406939  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.407341  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.407407  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.407482  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.407667  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.407898  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.407938  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.407970  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.408115  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.408274  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.408278  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.408459  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.408473  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:48.408626  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:48.524775  601531 ssh_runner.go:195] Run: systemctl --version
	I0127 14:07:48.532020  601531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:07:48.694345  601531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:07:48.704000  601531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:07:48.704077  601531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:07:48.719041  601531 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 14:07:48.719064  601531 start.go:495] detecting cgroup driver to use...
	I0127 14:07:48.719143  601531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:07:48.742423  601531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:07:48.761918  601531 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:07:48.761979  601531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:07:48.777294  601531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:07:48.792034  601531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:07:48.954341  601531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:07:49.087497  601531 docker.go:233] disabling docker service ...
	I0127 14:07:49.087581  601531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:07:49.105330  601531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:07:49.119089  601531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:07:49.297287  601531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:07:48.400546  601809 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 14:07:48.400763  601809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:07:48.400808  601809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:07:48.421986  601809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0127 14:07:48.422419  601809 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:07:48.422927  601809 main.go:141] libmachine: Using API Version  1
	I0127 14:07:48.422948  601809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:07:48.423288  601809 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:07:48.423451  601809 main.go:141] libmachine: (no-preload-183205) Calling .GetMachineName
	I0127 14:07:48.423545  601809 main.go:141] libmachine: (no-preload-183205) Calling .DriverName
	I0127 14:07:48.423643  601809 start.go:159] libmachine.API.Create for "no-preload-183205" (driver="kvm2")
	I0127 14:07:48.423674  601809 client.go:168] LocalClient.Create starting
	I0127 14:07:48.423709  601809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem
	I0127 14:07:48.423749  601809 main.go:141] libmachine: Decoding PEM data...
	I0127 14:07:48.423771  601809 main.go:141] libmachine: Parsing certificate...
	I0127 14:07:48.423839  601809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem
	I0127 14:07:48.423867  601809 main.go:141] libmachine: Decoding PEM data...
	I0127 14:07:48.423884  601809 main.go:141] libmachine: Parsing certificate...
	I0127 14:07:48.423920  601809 main.go:141] libmachine: Running pre-create checks...
	I0127 14:07:48.423933  601809 main.go:141] libmachine: (no-preload-183205) Calling .PreCreateCheck
	I0127 14:07:48.424225  601809 main.go:141] libmachine: (no-preload-183205) Calling .GetConfigRaw
	I0127 14:07:48.424620  601809 main.go:141] libmachine: Creating machine...
	I0127 14:07:48.424637  601809 main.go:141] libmachine: (no-preload-183205) Calling .Create
	I0127 14:07:48.424748  601809 main.go:141] libmachine: (no-preload-183205) creating KVM machine...
	I0127 14:07:48.424764  601809 main.go:141] libmachine: (no-preload-183205) creating network...
	I0127 14:07:48.425981  601809 main.go:141] libmachine: (no-preload-183205) DBG | found existing default KVM network
	I0127 14:07:48.427278  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:48.427123  601857 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1d:6c:da} reservation:<nil>}
	I0127 14:07:48.428512  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:48.428430  601857 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000320a40}
	I0127 14:07:48.428651  601809 main.go:141] libmachine: (no-preload-183205) DBG | created network xml: 
	I0127 14:07:48.428676  601809 main.go:141] libmachine: (no-preload-183205) DBG | <network>
	I0127 14:07:48.428687  601809 main.go:141] libmachine: (no-preload-183205) DBG |   <name>mk-no-preload-183205</name>
	I0127 14:07:48.428693  601809 main.go:141] libmachine: (no-preload-183205) DBG |   <dns enable='no'/>
	I0127 14:07:48.428702  601809 main.go:141] libmachine: (no-preload-183205) DBG |   
	I0127 14:07:48.428710  601809 main.go:141] libmachine: (no-preload-183205) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0127 14:07:48.428719  601809 main.go:141] libmachine: (no-preload-183205) DBG |     <dhcp>
	I0127 14:07:48.428727  601809 main.go:141] libmachine: (no-preload-183205) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0127 14:07:48.428748  601809 main.go:141] libmachine: (no-preload-183205) DBG |     </dhcp>
	I0127 14:07:48.428755  601809 main.go:141] libmachine: (no-preload-183205) DBG |   </ip>
	I0127 14:07:48.428761  601809 main.go:141] libmachine: (no-preload-183205) DBG |   
	I0127 14:07:48.428767  601809 main.go:141] libmachine: (no-preload-183205) DBG | </network>
	I0127 14:07:48.428775  601809 main.go:141] libmachine: (no-preload-183205) DBG | 
	I0127 14:07:48.438595  601809 main.go:141] libmachine: (no-preload-183205) DBG | trying to create private KVM network mk-no-preload-183205 192.168.50.0/24...
	I0127 14:07:48.520071  601809 main.go:141] libmachine: (no-preload-183205) DBG | private KVM network mk-no-preload-183205 192.168.50.0/24 created
	I0127 14:07:48.520117  601809 main.go:141] libmachine: (no-preload-183205) setting up store path in /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205 ...
	I0127 14:07:48.520138  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:48.520044  601857 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:07:48.520161  601809 main.go:141] libmachine: (no-preload-183205) building disk image from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:07:48.520188  601809 main.go:141] libmachine: (no-preload-183205) Downloading /home/jenkins/minikube-integration/20327-555419/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 14:07:48.883721  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:48.883591  601857 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205/id_rsa...
	I0127 14:07:49.332540  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:49.332391  601857 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205/no-preload-183205.rawdisk...
	I0127 14:07:49.332585  601809 main.go:141] libmachine: (no-preload-183205) DBG | Writing magic tar header
	I0127 14:07:49.332603  601809 main.go:141] libmachine: (no-preload-183205) DBG | Writing SSH key tar header
	I0127 14:07:49.332616  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:49.332499  601857 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205 ...
	I0127 14:07:49.332632  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205
	I0127 14:07:49.332643  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines
	I0127 14:07:49.332660  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205 (perms=drwx------)
	I0127 14:07:49.332678  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines (perms=drwxr-xr-x)
	I0127 14:07:49.332714  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:07:49.332727  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube (perms=drwxr-xr-x)
	I0127 14:07:49.332739  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins/minikube-integration/20327-555419 (perms=drwxrwxr-x)
	I0127 14:07:49.332749  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419
	I0127 14:07:49.332763  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 14:07:49.332774  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 14:07:49.332787  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins
	I0127 14:07:49.332798  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home
	I0127 14:07:49.332806  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 14:07:49.332817  601809 main.go:141] libmachine: (no-preload-183205) creating domain...
	I0127 14:07:49.332830  601809 main.go:141] libmachine: (no-preload-183205) DBG | skipping /home - not owner
	I0127 14:07:49.334085  601809 main.go:141] libmachine: (no-preload-183205) define libvirt domain using xml: 
	I0127 14:07:49.334116  601809 main.go:141] libmachine: (no-preload-183205) <domain type='kvm'>
	I0127 14:07:49.334154  601809 main.go:141] libmachine: (no-preload-183205)   <name>no-preload-183205</name>
	I0127 14:07:49.334179  601809 main.go:141] libmachine: (no-preload-183205)   <memory unit='MiB'>2200</memory>
	I0127 14:07:49.334193  601809 main.go:141] libmachine: (no-preload-183205)   <vcpu>2</vcpu>
	I0127 14:07:49.334203  601809 main.go:141] libmachine: (no-preload-183205)   <features>
	I0127 14:07:49.334212  601809 main.go:141] libmachine: (no-preload-183205)     <acpi/>
	I0127 14:07:49.334223  601809 main.go:141] libmachine: (no-preload-183205)     <apic/>
	I0127 14:07:49.334231  601809 main.go:141] libmachine: (no-preload-183205)     <pae/>
	I0127 14:07:49.334241  601809 main.go:141] libmachine: (no-preload-183205)     
	I0127 14:07:49.334266  601809 main.go:141] libmachine: (no-preload-183205)   </features>
	I0127 14:07:49.334283  601809 main.go:141] libmachine: (no-preload-183205)   <cpu mode='host-passthrough'>
	I0127 14:07:49.334292  601809 main.go:141] libmachine: (no-preload-183205)   
	I0127 14:07:49.334299  601809 main.go:141] libmachine: (no-preload-183205)   </cpu>
	I0127 14:07:49.334307  601809 main.go:141] libmachine: (no-preload-183205)   <os>
	I0127 14:07:49.334318  601809 main.go:141] libmachine: (no-preload-183205)     <type>hvm</type>
	I0127 14:07:49.334327  601809 main.go:141] libmachine: (no-preload-183205)     <boot dev='cdrom'/>
	I0127 14:07:49.334336  601809 main.go:141] libmachine: (no-preload-183205)     <boot dev='hd'/>
	I0127 14:07:49.334345  601809 main.go:141] libmachine: (no-preload-183205)     <bootmenu enable='no'/>
	I0127 14:07:49.334354  601809 main.go:141] libmachine: (no-preload-183205)   </os>
	I0127 14:07:49.334383  601809 main.go:141] libmachine: (no-preload-183205)   <devices>
	I0127 14:07:49.334399  601809 main.go:141] libmachine: (no-preload-183205)     <disk type='file' device='cdrom'>
	I0127 14:07:49.334417  601809 main.go:141] libmachine: (no-preload-183205)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205/boot2docker.iso'/>
	I0127 14:07:49.334428  601809 main.go:141] libmachine: (no-preload-183205)       <target dev='hdc' bus='scsi'/>
	I0127 14:07:49.334436  601809 main.go:141] libmachine: (no-preload-183205)       <readonly/>
	I0127 14:07:49.334444  601809 main.go:141] libmachine: (no-preload-183205)     </disk>
	I0127 14:07:49.334460  601809 main.go:141] libmachine: (no-preload-183205)     <disk type='file' device='disk'>
	I0127 14:07:49.334478  601809 main.go:141] libmachine: (no-preload-183205)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 14:07:49.334495  601809 main.go:141] libmachine: (no-preload-183205)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205/no-preload-183205.rawdisk'/>
	I0127 14:07:49.334506  601809 main.go:141] libmachine: (no-preload-183205)       <target dev='hda' bus='virtio'/>
	I0127 14:07:49.334517  601809 main.go:141] libmachine: (no-preload-183205)     </disk>
	I0127 14:07:49.334527  601809 main.go:141] libmachine: (no-preload-183205)     <interface type='network'>
	I0127 14:07:49.334537  601809 main.go:141] libmachine: (no-preload-183205)       <source network='mk-no-preload-183205'/>
	I0127 14:07:49.334552  601809 main.go:141] libmachine: (no-preload-183205)       <model type='virtio'/>
	I0127 14:07:49.334561  601809 main.go:141] libmachine: (no-preload-183205)     </interface>
	I0127 14:07:49.334572  601809 main.go:141] libmachine: (no-preload-183205)     <interface type='network'>
	I0127 14:07:49.334581  601809 main.go:141] libmachine: (no-preload-183205)       <source network='default'/>
	I0127 14:07:49.334591  601809 main.go:141] libmachine: (no-preload-183205)       <model type='virtio'/>
	I0127 14:07:49.334599  601809 main.go:141] libmachine: (no-preload-183205)     </interface>
	I0127 14:07:49.334609  601809 main.go:141] libmachine: (no-preload-183205)     <serial type='pty'>
	I0127 14:07:49.334618  601809 main.go:141] libmachine: (no-preload-183205)       <target port='0'/>
	I0127 14:07:49.334628  601809 main.go:141] libmachine: (no-preload-183205)     </serial>
	I0127 14:07:49.334637  601809 main.go:141] libmachine: (no-preload-183205)     <console type='pty'>
	I0127 14:07:49.334644  601809 main.go:141] libmachine: (no-preload-183205)       <target type='serial' port='0'/>
	I0127 14:07:49.334652  601809 main.go:141] libmachine: (no-preload-183205)     </console>
	I0127 14:07:49.334662  601809 main.go:141] libmachine: (no-preload-183205)     <rng model='virtio'>
	I0127 14:07:49.334672  601809 main.go:141] libmachine: (no-preload-183205)       <backend model='random'>/dev/random</backend>
	I0127 14:07:49.334681  601809 main.go:141] libmachine: (no-preload-183205)     </rng>
	I0127 14:07:49.334689  601809 main.go:141] libmachine: (no-preload-183205)     
	I0127 14:07:49.334702  601809 main.go:141] libmachine: (no-preload-183205)     
	I0127 14:07:49.334714  601809 main.go:141] libmachine: (no-preload-183205)   </devices>
	I0127 14:07:49.334721  601809 main.go:141] libmachine: (no-preload-183205) </domain>
	I0127 14:07:49.334733  601809 main.go:141] libmachine: (no-preload-183205) 
	I0127 14:07:49.339436  601809 main.go:141] libmachine: (no-preload-183205) DBG | domain no-preload-183205 has defined MAC address 52:54:00:55:22:13 in network default
	I0127 14:07:49.340165  601809 main.go:141] libmachine: (no-preload-183205) starting domain...
	I0127 14:07:49.340191  601809 main.go:141] libmachine: (no-preload-183205) DBG | domain no-preload-183205 has defined MAC address 52:54:00:20:60:92 in network mk-no-preload-183205
	I0127 14:07:49.340200  601809 main.go:141] libmachine: (no-preload-183205) ensuring networks are active...
	I0127 14:07:49.340971  601809 main.go:141] libmachine: (no-preload-183205) Ensuring network default is active
	I0127 14:07:49.341319  601809 main.go:141] libmachine: (no-preload-183205) Ensuring network mk-no-preload-183205 is active
	I0127 14:07:49.341986  601809 main.go:141] libmachine: (no-preload-183205) getting domain XML...
	I0127 14:07:49.342845  601809 main.go:141] libmachine: (no-preload-183205) creating domain...
	I0127 14:07:49.754647  601809 main.go:141] libmachine: (no-preload-183205) waiting for IP...
	I0127 14:07:49.755802  601809 main.go:141] libmachine: (no-preload-183205) DBG | domain no-preload-183205 has defined MAC address 52:54:00:20:60:92 in network mk-no-preload-183205
	I0127 14:07:49.756435  601809 main.go:141] libmachine: (no-preload-183205) DBG | unable to find current IP address of domain no-preload-183205 in network mk-no-preload-183205
	I0127 14:07:49.756518  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:49.756441  601857 retry.go:31] will retry after 272.533474ms: waiting for domain to come up
	I0127 14:07:49.633836  601531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:07:49.794413  601531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:07:49.967991  601531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:07:49.968073  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.084660  601531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:07:50.084743  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.131683  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.199059  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.266955  601531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:07:50.289803  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.308029  601531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.346036  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.378069  601531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:07:50.396951  601531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:07:50.417736  601531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:50.685609  601531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:07:51.151301  601531 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:07:51.151405  601531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:07:51.156366  601531 start.go:563] Will wait 60s for crictl version
	I0127 14:07:51.156427  601531 ssh_runner.go:195] Run: which crictl
	I0127 14:07:51.160621  601531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:07:51.202254  601531 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:07:51.202354  601531 ssh_runner.go:195] Run: crio --version
	I0127 14:07:51.232941  601531 ssh_runner.go:195] Run: crio --version
	I0127 14:07:51.309473  601531 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:07:48.482045  601373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:07:51.310590  601531 main.go:141] libmachine: (pause-966446) Calling .GetIP
	I0127 14:07:51.314164  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:51.314691  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:51.314723  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:51.315015  601531 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 14:07:51.342772  601531 kubeadm.go:883] updating cluster {Name:pause-966446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-966446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:07:51.342916  601531 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:07:51.342980  601531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:51.540139  601531 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:07:51.540176  601531 crio.go:433] Images already preloaded, skipping extraction
	I0127 14:07:51.540245  601531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:51.723140  601531 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:07:51.723176  601531 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:07:51.723208  601531 kubeadm.go:934] updating node { 192.168.61.72 8443 v1.32.1 crio true true} ...
	I0127 14:07:51.723370  601531 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-966446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-966446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:07:51.723451  601531 ssh_runner.go:195] Run: crio config
	I0127 14:07:51.833408  601531 cni.go:84] Creating CNI manager for ""
	I0127 14:07:51.833430  601531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:07:51.833440  601531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:07:51.833472  601531 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.72 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-966446 NodeName:pause-966446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:07:51.833663  601531 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-966446"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.72"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.72"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:07:51.833759  601531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:07:51.849643  601531 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:07:51.849734  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:07:51.859622  601531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0127 14:07:51.878933  601531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:07:51.927875  601531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0127 14:07:51.946086  601531 ssh_runner.go:195] Run: grep 192.168.61.72	control-plane.minikube.internal$ /etc/hosts
	I0127 14:07:51.954227  601531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:52.111702  601531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:07:52.136062  601531 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446 for IP: 192.168.61.72
	I0127 14:07:52.136089  601531 certs.go:194] generating shared ca certs ...
	I0127 14:07:52.136111  601531 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:52.136278  601531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:07:52.136342  601531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:07:52.136354  601531 certs.go:256] generating profile certs ...
	I0127 14:07:52.136983  601531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/client.key
	I0127 14:07:52.137115  601531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/apiserver.key.f1093c80
	I0127 14:07:52.137177  601531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/proxy-client.key
	I0127 14:07:52.137354  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:07:52.137393  601531 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:07:52.137408  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:07:52.137445  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:07:52.137487  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:07:52.137518  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:07:52.137570  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:52.139063  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:07:52.163001  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:07:52.186084  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:07:52.208609  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:07:52.231068  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 14:07:52.255538  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:07:52.279172  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:07:52.304122  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:07:52.327911  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:07:52.350495  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:07:52.374464  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:07:52.413824  601531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:07:52.429570  601531 ssh_runner.go:195] Run: openssl version
	I0127 14:07:52.435317  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:07:52.446068  601531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:07:52.450652  601531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:07:52.450700  601531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:07:52.456430  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:07:52.466347  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:07:52.478172  601531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:52.483080  601531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:52.483133  601531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:52.488944  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:07:52.498827  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:07:52.510270  601531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:07:52.514978  601531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:07:52.515019  601531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:07:52.520460  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:07:52.529770  601531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:07:52.534208  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:07:52.539664  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:07:52.545155  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:07:52.550674  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:07:52.556058  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:07:52.561391  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 14:07:52.566877  601531 kubeadm.go:392] StartCluster: {Name:pause-966446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-966446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:07:52.566970  601531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:07:52.567004  601531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:07:52.602509  601531 cri.go:89] found id: "9a4a6873a790179033815b842a490593ca7e247ab4c35927ab123d40b5b1c1b0"
	I0127 14:07:52.602533  601531 cri.go:89] found id: "153475c34d724a00aae02973ec25d6ba069b6798d663e0fb03fdcb678fbf90dc"
	I0127 14:07:52.602539  601531 cri.go:89] found id: "538bc3dc9efa53fa541ba54500003bc5a9f4ecc98ce84f4299f09c6519df409f"
	I0127 14:07:52.602544  601531 cri.go:89] found id: "ddaac33d82a8a7fca412c3f5cce780ba01829a09277d596b2eb83c688aa40627"
	I0127 14:07:52.602548  601531 cri.go:89] found id: "2fff1ca9ed0fb4d432dbddcbfba74d463e908d8e323e8f7da8389d0e159e27eb"
	I0127 14:07:52.602552  601531 cri.go:89] found id: "67099ee481deaf66bccd062bf3bbfde8a62b7a39d5819e92b57acf9ddbb3d637"
	I0127 14:07:52.602556  601531 cri.go:89] found id: ""
	I0127 14:07:52.602593  601531 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-966446 -n pause-966446
helpers_test.go:261: (dbg) Run:  kubectl --context pause-966446 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-966446 -n pause-966446
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-966446 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-966446 logs -n 25: (1.568672059s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-418372 sudo cat                            | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo cat                            | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo cat                            | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo cat                            | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo                                | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo find                           | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-418372 sudo crio                           | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p cilium-418372                                     | cilium-418372          | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| start   | -p stopped-upgrade-736772                            | minikube               | jenkins | v1.26.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:06 UTC |
	|         | --memory=2200 --vm-driver=kvm2                       |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                        |         |         |                     |                     |
	| ssh     | -p NoKubernetes-412983 sudo                          | NoKubernetes-412983    | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | systemctl is-active --quiet                          |                        |         |         |                     |                     |
	|         | service kubelet                                      |                        |         |         |                     |                     |
	| delete  | -p NoKubernetes-412983                               | NoKubernetes-412983    | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| start   | -p pause-966446 --memory=2048                        | pause-966446           | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:07 UTC |
	|         | --install-addons=false                               |                        |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| stop    | stopped-upgrade-736772 stop                          | minikube               | jenkins | v1.26.0 | 27 Jan 25 14:06 UTC | 27 Jan 25 14:06 UTC |
	| start   | -p stopped-upgrade-736772                            | stopped-upgrade-736772 | jenkins | v1.35.0 | 27 Jan 25 14:06 UTC | 27 Jan 25 14:07 UTC |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| delete  | -p stopped-upgrade-736772                            | stopped-upgrade-736772 | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC | 27 Jan 25 14:07 UTC |
	| start   | -p cert-expiration-335486                            | cert-expiration-335486 | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC | 27 Jan 25 14:07 UTC |
	|         | --memory=2048                                        |                        |         |         |                     |                     |
	|         | --cert-expiration=8760h                              |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-456130                            | old-k8s-version-456130 | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                        |         |         |                     |                     |
	| start   | -p pause-966446                                      | pause-966446           | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC | 27 Jan 25 14:08 UTC |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| delete  | -p cert-expiration-335486                            | cert-expiration-335486 | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC | 27 Jan 25 14:07 UTC |
	| start   | -p no-preload-183205                                 | no-preload-183205      | jenkins | v1.35.0 | 27 Jan 25 14:07 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                         |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:07:39
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:07:39.995138  601809 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:07:39.995253  601809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:07:39.995265  601809 out.go:358] Setting ErrFile to fd 2...
	I0127 14:07:39.995271  601809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:07:39.995477  601809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:07:39.996057  601809 out.go:352] Setting JSON to false
	I0127 14:07:39.997072  601809 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17405,"bootTime":1737969455,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:07:39.997182  601809 start.go:139] virtualization: kvm guest
	I0127 14:07:39.998902  601809 out.go:177] * [no-preload-183205] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:07:40.000550  601809 notify.go:220] Checking for updates...
	I0127 14:07:40.000559  601809 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:07:40.001745  601809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:07:40.002922  601809 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:07:40.004219  601809 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:07:40.005491  601809 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:07:40.006808  601809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:07:40.008634  601809 config.go:182] Loaded profile config "kubernetes-upgrade-225004": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:07:40.008824  601809 config.go:182] Loaded profile config "old-k8s-version-456130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:07:40.009034  601809 config.go:182] Loaded profile config "pause-966446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:07:40.009161  601809 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:07:40.050865  601809 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:07:40.052008  601809 start.go:297] selected driver: kvm2
	I0127 14:07:40.052029  601809 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:07:40.052044  601809 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:07:40.053050  601809 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.053145  601809 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:07:40.069538  601809 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:07:40.069633  601809 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:07:40.069954  601809 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:07:40.070033  601809 cni.go:84] Creating CNI manager for ""
	I0127 14:07:40.070116  601809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:07:40.070128  601809 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:07:40.070206  601809 start.go:340] cluster config:
	{Name:no-preload-183205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-183205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:07:40.070401  601809 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.071793  601809 out.go:177] * Starting "no-preload-183205" primary control-plane node in "no-preload-183205" cluster
	I0127 14:07:38.454864  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-456130
	
	I0127 14:07:38.454932  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.457742  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.458173  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.458207  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.458350  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.458587  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.458762  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.458927  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.459102  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:38.459311  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:38.459349  601373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-456130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-456130/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-456130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:07:38.585645  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:07:38.585683  601373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:07:38.585741  601373 buildroot.go:174] setting up certificates
	I0127 14:07:38.585755  601373 provision.go:84] configureAuth start
	I0127 14:07:38.585772  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetMachineName
	I0127 14:07:38.586102  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:38.589345  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.589793  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.589823  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.589991  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.592421  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.592828  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.592860  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.593007  601373 provision.go:143] copyHostCerts
	I0127 14:07:38.593064  601373 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:07:38.593091  601373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:07:38.593170  601373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:07:38.593347  601373 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:07:38.593362  601373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:07:38.593392  601373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:07:38.593472  601373 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:07:38.593481  601373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:07:38.593503  601373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:07:38.593570  601373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-456130 san=[127.0.0.1 192.168.39.11 localhost minikube old-k8s-version-456130]
	I0127 14:07:38.768898  601373 provision.go:177] copyRemoteCerts
	I0127 14:07:38.768964  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:07:38.768999  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.771730  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.772083  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.772124  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.772282  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.772477  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.772635  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.772784  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:38.859870  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:07:38.885052  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 14:07:38.911635  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:07:38.935458  601373 provision.go:87] duration metric: took 349.687848ms to configureAuth
	I0127 14:07:38.935490  601373 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:07:38.935724  601373 config.go:182] Loaded profile config "old-k8s-version-456130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:07:38.935827  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:38.939100  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.939413  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:38.939445  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:38.939604  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:38.939827  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.940036  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:38.940197  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:38.940380  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:38.940629  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:38.940652  601373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:07:39.198836  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:07:39.198866  601373 main.go:141] libmachine: Checking connection to Docker...
	I0127 14:07:39.198874  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetURL
	I0127 14:07:39.200067  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | using libvirt version 6000000
	I0127 14:07:39.203833  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.204766  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.204793  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.205007  601373 main.go:141] libmachine: Docker is up and running!
	I0127 14:07:39.205024  601373 main.go:141] libmachine: Reticulating splines...
	I0127 14:07:39.205031  601373 client.go:171] duration metric: took 24.938263372s to LocalClient.Create
	I0127 14:07:39.205058  601373 start.go:167] duration metric: took 24.938330128s to libmachine.API.Create "old-k8s-version-456130"
	I0127 14:07:39.205072  601373 start.go:293] postStartSetup for "old-k8s-version-456130" (driver="kvm2")
	I0127 14:07:39.205093  601373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:07:39.205118  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.205374  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:07:39.205407  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.210121  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.212293  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.212324  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.212592  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.212757  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.212942  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.213088  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:39.300676  601373 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:07:39.305063  601373 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:07:39.305089  601373 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:07:39.305171  601373 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:07:39.305268  601373 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:07:39.305392  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:07:39.316817  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:39.342960  601373 start.go:296] duration metric: took 137.875244ms for postStartSetup
	I0127 14:07:39.343015  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetConfigRaw
	I0127 14:07:39.343611  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:39.753533  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.753907  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.753930  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.754271  601373 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/config.json ...
	I0127 14:07:39.754483  601373 start.go:128] duration metric: took 25.508299796s to createHost
	I0127 14:07:39.754518  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.756915  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.757237  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.757272  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.757400  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.757611  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.757779  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.757926  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.758089  601373 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:39.758248  601373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:07:39.758258  601373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:07:39.879057  601373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986859.855229643
	
	I0127 14:07:39.879079  601373 fix.go:216] guest clock: 1737986859.855229643
	I0127 14:07:39.879088  601373 fix.go:229] Guest: 2025-01-27 14:07:39.855229643 +0000 UTC Remote: 2025-01-27 14:07:39.75450005 +0000 UTC m=+31.428265457 (delta=100.729593ms)
	I0127 14:07:39.879122  601373 fix.go:200] guest clock delta is within tolerance: 100.729593ms
	I0127 14:07:39.879129  601373 start.go:83] releasing machines lock for "old-k8s-version-456130", held for 25.633123341s
	I0127 14:07:39.879156  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.879419  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:39.882266  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.882753  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.882778  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.882967  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.883551  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.883743  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:07:39.883842  601373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:07:39.883882  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.884110  601373 ssh_runner.go:195] Run: cat /version.json
	I0127 14:07:39.884136  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:07:39.886654  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887060  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.887121  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887145  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887321  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.887480  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.887648  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:39.887663  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.887669  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:39.887828  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:39.887853  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:07:39.888019  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:07:39.888172  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:07:39.888306  601373 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:07:39.974593  601373 ssh_runner.go:195] Run: systemctl --version
	I0127 14:07:39.998185  601373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:07:40.159948  601373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:07:40.166159  601373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:07:40.166229  601373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:07:40.185635  601373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:07:40.185657  601373 start.go:495] detecting cgroup driver to use...
	I0127 14:07:40.185727  601373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:07:40.204886  601373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:07:40.218758  601373 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:07:40.218813  601373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:07:40.234338  601373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:07:40.249194  601373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:07:40.405723  601373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:07:40.561717  601373 docker.go:233] disabling docker service ...
	I0127 14:07:40.561787  601373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:07:40.577711  601373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:07:40.593087  601373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:07:40.765539  601373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:07:40.900954  601373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:07:40.915793  601373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:07:40.935250  601373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 14:07:40.935316  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.945849  601373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:07:40.945907  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.955796  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.965535  601373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:40.975655  601373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:07:40.985983  601373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:07:40.995087  601373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:07:40.995142  601373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:07:41.007442  601373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:07:41.018580  601373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:41.150827  601373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:07:41.235346  601373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:07:41.235426  601373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:07:41.239989  601373 start.go:563] Will wait 60s for crictl version
	I0127 14:07:41.240037  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:41.243750  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:07:41.280633  601373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:07:41.280709  601373 ssh_runner.go:195] Run: crio --version
	I0127 14:07:41.312743  601373 ssh_runner.go:195] Run: crio --version
	I0127 14:07:41.342444  601373 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 14:07:41.343595  601373 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:07:41.346163  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:41.346587  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:07:28 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:07:41.346619  601373 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:07:41.346796  601373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 14:07:41.351141  601373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:07:41.363722  601373 kubeadm.go:883] updating cluster {Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:07:41.363830  601373 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 14:07:41.363893  601373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:41.394760  601373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 14:07:41.394820  601373 ssh_runner.go:195] Run: which lz4
	I0127 14:07:41.398404  601373 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:07:41.402316  601373 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:07:41.402348  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 14:07:43.199494  601373 crio.go:462] duration metric: took 1.801104328s to copy over tarball
	I0127 14:07:43.199572  601373 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:07:39.906911  601531 machine.go:93] provisionDockerMachine start ...
	I0127 14:07:39.906949  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:39.907481  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:39.910325  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:39.910762  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:39.910797  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:39.910950  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:39.911119  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:39.911295  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:39.911446  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:39.911572  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:39.911826  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:39.911845  601531 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:07:40.027037  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-966446
	
	I0127 14:07:40.027073  601531 main.go:141] libmachine: (pause-966446) Calling .GetMachineName
	I0127 14:07:40.027344  601531 buildroot.go:166] provisioning hostname "pause-966446"
	I0127 14:07:40.027375  601531 main.go:141] libmachine: (pause-966446) Calling .GetMachineName
	I0127 14:07:40.027550  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.030738  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.031193  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.031218  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.031433  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.031655  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.031841  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.031991  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.032158  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:40.032374  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:40.032387  601531 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-966446 && echo "pause-966446" | sudo tee /etc/hostname
	I0127 14:07:40.166642  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-966446
	
	I0127 14:07:40.166671  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.170024  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.170512  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.170565  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.170778  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.170976  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.171116  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.171271  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.171432  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:40.171606  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:40.171624  601531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-966446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-966446/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-966446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:07:40.292064  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:07:40.292093  601531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:07:40.292114  601531 buildroot.go:174] setting up certificates
	I0127 14:07:40.292125  601531 provision.go:84] configureAuth start
	I0127 14:07:40.292139  601531 main.go:141] libmachine: (pause-966446) Calling .GetMachineName
	I0127 14:07:40.292445  601531 main.go:141] libmachine: (pause-966446) Calling .GetIP
	I0127 14:07:40.295453  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.295895  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.295941  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.296050  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.298488  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.298935  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.298963  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.299181  601531 provision.go:143] copyHostCerts
	I0127 14:07:40.299250  601531 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:07:40.299282  601531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:07:40.299362  601531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:07:40.299525  601531 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:07:40.299542  601531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:07:40.299583  601531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:07:40.299703  601531 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:07:40.299718  601531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:07:40.299754  601531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:07:40.299869  601531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.pause-966446 san=[127.0.0.1 192.168.61.72 localhost minikube pause-966446]
	I0127 14:07:40.473785  601531 provision.go:177] copyRemoteCerts
	I0127 14:07:40.473854  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:07:40.473891  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.476480  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.476874  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.476904  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.477238  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.477436  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.477660  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.477835  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:40.577510  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:07:40.605346  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0127 14:07:40.635897  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 14:07:40.662885  601531 provision.go:87] duration metric: took 370.74521ms to configureAuth
	I0127 14:07:40.662909  601531 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:07:40.663150  601531 config.go:182] Loaded profile config "pause-966446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:07:40.663247  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:40.666176  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.666572  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:40.666607  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:40.666906  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:40.667096  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.667280  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:40.667426  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:40.667580  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:40.667771  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:40.667787  601531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:07:40.072821  601809 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:07:40.073002  601809 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/config.json ...
	I0127 14:07:40.073044  601809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/config.json: {Name:mka9c8ee9958e3f7ec7463281626fe1e3efb5598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:40.073113  601809 cache.go:107] acquiring lock: {Name:mk66b4f28a03faaae643efe520674fad2917cdda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073136  601809 cache.go:107] acquiring lock: {Name:mk6fbc282aded7ec6720a3c60ca5a3553bfd9648 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073133  601809 cache.go:107] acquiring lock: {Name:mk36c363b77b19af873b7dba68e6372e01e796ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073221  601809 start.go:360] acquireMachinesLock for no-preload-183205: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:07:40.073270  601809 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 14:07:40.073158  601809 cache.go:107] acquiring lock: {Name:mk5c6e88180d8da47162934c7e3e1802d2b17603 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073299  601809 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 14:07:40.073281  601809 cache.go:115] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 14:07:40.073343  601809 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 223.584µs
	I0127 14:07:40.073354  601809 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 14:07:40.073365  601809 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 14:07:40.073481  601809 cache.go:107] acquiring lock: {Name:mk43dc5afe3fb66354ecfbaac283409e7be87f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073593  601809 cache.go:107] acquiring lock: {Name:mk8e22d7888ff554b79f22bad43b84267c64f3cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073655  601809 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 14:07:40.073639  601809 cache.go:107] acquiring lock: {Name:mkb3dbf54b3c350f3252e35e2756d0e31b75ee20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073716  601809 cache.go:107] acquiring lock: {Name:mk27af2a77b4a1751a1c6ee4547349937489ce95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:07:40.073760  601809 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0127 14:07:40.073805  601809 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0127 14:07:40.073894  601809 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 14:07:40.074857  601809 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0127 14:07:40.074870  601809 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 14:07:40.074866  601809 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 14:07:40.074894  601809 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 14:07:40.074899  601809 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 14:07:40.074882  601809 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 14:07:40.074968  601809 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0127 14:07:40.245790  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1
	I0127 14:07:40.248406  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0127 14:07:40.251949  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1
	I0127 14:07:40.252149  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1
	I0127 14:07:40.259010  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0127 14:07:40.268652  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	I0127 14:07:40.275859  601809 cache.go:162] opening:  /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0127 14:07:40.354223  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 14:07:40.354255  601809 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 280.685672ms
	I0127 14:07:40.354270  601809 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 14:07:40.775887  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 14:07:40.775913  601809 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 702.785131ms
	I0127 14:07:40.775927  601809 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 14:07:41.770463  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 14:07:41.770497  601809 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 1.696827747s
	I0127 14:07:41.770513  601809 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 14:07:41.809456  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 14:07:41.809491  601809 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 1.736012385s
	I0127 14:07:41.809507  601809 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 14:07:41.900279  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 14:07:41.900316  601809 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 1.827215848s
	I0127 14:07:41.900332  601809 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 14:07:41.918472  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 14:07:41.918505  601809 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 1.845351664s
	I0127 14:07:41.918520  601809 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 14:07:42.242561  601809 cache.go:157] /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 14:07:42.242597  601809 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 2.169067471s
	I0127 14:07:42.242613  601809 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 14:07:42.242636  601809 cache.go:87] Successfully saved all images to host disk.
	I0127 14:07:45.681236  601373 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.481625055s)
	I0127 14:07:45.681272  601373 crio.go:469] duration metric: took 2.481746403s to extract the tarball
	I0127 14:07:45.681283  601373 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 14:07:45.723404  601373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:45.766291  601373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 14:07:45.766315  601373 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 14:07:45.766388  601373 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:07:45.766433  601373 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:45.766461  601373 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:45.766492  601373 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:45.766533  601373 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 14:07:45.766531  601373 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 14:07:45.766468  601373 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:45.766411  601373 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:45.767945  601373 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:45.767945  601373 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:45.767990  601373 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 14:07:45.767960  601373 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:07:45.768071  601373 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 14:07:45.767958  601373 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:45.767963  601373 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:45.767961  601373 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:45.921712  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:45.928344  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:45.928604  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:45.933554  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:45.933934  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:45.938018  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 14:07:45.983763  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 14:07:46.033999  601373 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 14:07:46.034053  601373 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.034110  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.068009  601373 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 14:07:46.068059  601373 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.068054  601373 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 14:07:46.068098  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.068110  601373 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.068153  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.104861  601373 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 14:07:46.104892  601373 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 14:07:46.104913  601373 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.104924  601373 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.104953  601373 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 14:07:46.104964  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.104980  601373 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.105007  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.104962  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.110692  601373 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 14:07:46.110724  601373 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 14:07:46.110749  601373 ssh_runner.go:195] Run: which crictl
	I0127 14:07:46.110774  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.110698  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.110853  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.118171  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.118213  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.118271  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.133826  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:07:46.248375  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.248407  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.259675  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.259775  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.285086  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.285190  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.297983  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:07:46.375150  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:07:46.406972  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:07:46.407097  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:07:46.407118  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:07:46.429281  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:07:46.441476  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:07:46.441554  601373 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:07:46.519114  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 14:07:46.566524  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 14:07:46.566546  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 14:07:46.566643  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 14:07:46.584390  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 14:07:46.585274  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 14:07:46.585389  601373 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 14:07:46.674534  601373 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:07:46.815091  601373 cache_images.go:92] duration metric: took 1.048759178s to LoadCachedImages
	W0127 14:07:46.815206  601373 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0127 14:07:46.815228  601373 kubeadm.go:934] updating node { 192.168.39.11 8443 v1.20.0 crio true true} ...
	I0127 14:07:46.815358  601373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-456130 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:07:46.815423  601373 ssh_runner.go:195] Run: crio config
	I0127 14:07:46.874094  601373 cni.go:84] Creating CNI manager for ""
	I0127 14:07:46.874116  601373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:07:46.874125  601373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:07:46.874148  601373 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-456130 NodeName:old-k8s-version-456130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 14:07:46.874318  601373 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-456130"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:07:46.874398  601373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 14:07:46.884483  601373 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:07:46.884548  601373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:07:46.893923  601373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 14:07:46.910086  601373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:07:46.926183  601373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0127 14:07:46.942181  601373 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I0127 14:07:46.945997  601373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:07:46.957628  601373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:47.083251  601373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:07:47.099548  601373 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130 for IP: 192.168.39.11
	I0127 14:07:47.099571  601373 certs.go:194] generating shared ca certs ...
	I0127 14:07:47.099620  601373 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.099825  601373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:07:47.099872  601373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:07:47.099883  601373 certs.go:256] generating profile certs ...
	I0127 14:07:47.099941  601373 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.key
	I0127 14:07:47.099966  601373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.crt with IP's: []
	I0127 14:07:47.231224  601373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.crt ...
	I0127 14:07:47.231255  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.crt: {Name:mk2195be2553687d06225303e1e64a924b7177d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.231412  601373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.key ...
	I0127 14:07:47.231425  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.key: {Name:mk5eae8d9e14b45dbe6c6e0f3c3649d5f4445d5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.261333  601373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a
	I0127 14:07:47.261392  601373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.11]
	I0127 14:07:47.431351  601373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a ...
	I0127 14:07:47.431380  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a: {Name:mkcee3647454c013eeabdf2b71abfeb33a090099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.436354  601373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a ...
	I0127 14:07:47.436381  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a: {Name:mk7ade60d44a5e93338e4cd40c9a2ac34565f282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.436491  601373 certs.go:381] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt.294f913a -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt
	I0127 14:07:47.436583  601373 certs.go:385] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key
	I0127 14:07:47.436654  601373 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key
	I0127 14:07:47.436674  601373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt with IP's: []
	I0127 14:07:47.602017  601373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt ...
	I0127 14:07:47.602048  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt: {Name:mkdc8c889c4adb19570ac53e2a3880c16e79ab20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.602204  601373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key ...
	I0127 14:07:47.602217  601373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key: {Name:mk1e4f6a3159570dde8e09b032b2a9e14d0b7aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:47.602383  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:07:47.602419  601373 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:07:47.602429  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:07:47.602450  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:07:47.602472  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:07:47.602492  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:07:47.602527  601373 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:47.603059  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:07:47.629391  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:07:47.653289  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:07:47.676831  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:07:47.700476  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 14:07:47.730298  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:07:47.756643  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:07:47.780594  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 14:07:47.804880  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:07:47.827867  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:07:47.851041  601373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:07:47.875254  601373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:07:47.891763  601373 ssh_runner.go:195] Run: openssl version
	I0127 14:07:47.897606  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:07:47.908349  601373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:07:47.912980  601373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:07:47.913025  601373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:07:47.919121  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:07:47.929695  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:07:47.943397  601373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:07:47.948357  601373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:07:47.948408  601373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:07:47.954207  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:07:47.966803  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:07:47.979856  601373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:47.984586  601373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:47.984628  601373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:47.994192  601373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:07:48.014136  601373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:07:48.021745  601373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 14:07:48.021812  601373 kubeadm.go:392] StartCluster: {Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:07:48.021934  601373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:07:48.021983  601373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:07:48.068726  601373 cri.go:89] found id: ""
	I0127 14:07:48.068811  601373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:07:48.079370  601373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:07:48.092372  601373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:07:48.105607  601373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:07:48.105625  601373 kubeadm.go:157] found existing configuration files:
	
	I0127 14:07:48.105664  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:07:48.118318  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:07:48.118379  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:07:48.128022  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:07:48.137623  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:07:48.137689  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:07:48.149172  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:07:48.161255  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:07:48.161297  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:07:48.176810  601373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:07:48.185831  601373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:07:48.185885  601373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:07:48.195275  601373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:07:48.398667  601809 start.go:364] duration metric: took 8.325386996s to acquireMachinesLock for "no-preload-183205"
	I0127 14:07:48.398731  601809 start.go:93] Provisioning new machine with config: &{Name:no-preload-183205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-183205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:07:48.398906  601809 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 14:07:48.140390  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:07:48.140421  601531 machine.go:96] duration metric: took 8.233480321s to provisionDockerMachine
	I0127 14:07:48.140437  601531 start.go:293] postStartSetup for "pause-966446" (driver="kvm2")
	I0127 14:07:48.140450  601531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:07:48.140499  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.140860  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:07:48.140908  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.143436  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.143789  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.143817  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.143998  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.144214  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.144403  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.144558  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:48.236307  601531 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:07:48.241621  601531 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:07:48.241645  601531 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:07:48.241695  601531 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:07:48.241772  601531 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:07:48.241852  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:07:48.253943  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:48.280295  601531 start.go:296] duration metric: took 139.846011ms for postStartSetup
	I0127 14:07:48.280327  601531 fix.go:56] duration metric: took 8.401009659s for fixHost
	I0127 14:07:48.280349  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.283269  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.283690  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.283721  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.283910  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.284109  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.284271  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.284416  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.284557  601531 main.go:141] libmachine: Using SSH client type: native
	I0127 14:07:48.284780  601531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.72 22 <nil> <nil>}
	I0127 14:07:48.284791  601531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:07:48.398447  601531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986868.355909952
	
	I0127 14:07:48.398480  601531 fix.go:216] guest clock: 1737986868.355909952
	I0127 14:07:48.398491  601531 fix.go:229] Guest: 2025-01-27 14:07:48.355909952 +0000 UTC Remote: 2025-01-27 14:07:48.28033142 +0000 UTC m=+28.896632167 (delta=75.578532ms)
	I0127 14:07:48.398520  601531 fix.go:200] guest clock delta is within tolerance: 75.578532ms
	I0127 14:07:48.398527  601531 start.go:83] releasing machines lock for "pause-966446", held for 8.519261631s
	I0127 14:07:48.398569  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.398896  601531 main.go:141] libmachine: (pause-966446) Calling .GetIP
	I0127 14:07:48.402171  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.402618  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.402667  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.402940  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.403483  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.403689  601531 main.go:141] libmachine: (pause-966446) Calling .DriverName
	I0127 14:07:48.403796  601531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:07:48.403843  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.403901  601531 ssh_runner.go:195] Run: cat /version.json
	I0127 14:07:48.403928  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHHostname
	I0127 14:07:48.406939  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.407341  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.407407  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.407482  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.407667  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.407898  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.407938  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:48.407970  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:48.408115  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHPort
	I0127 14:07:48.408274  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHKeyPath
	I0127 14:07:48.408278  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.408459  601531 main.go:141] libmachine: (pause-966446) Calling .GetSSHUsername
	I0127 14:07:48.408473  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:48.408626  601531 sshutil.go:53] new ssh client: &{IP:192.168.61.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/pause-966446/id_rsa Username:docker}
	I0127 14:07:48.524775  601531 ssh_runner.go:195] Run: systemctl --version
	I0127 14:07:48.532020  601531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:07:48.694345  601531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:07:48.704000  601531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:07:48.704077  601531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:07:48.719041  601531 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 14:07:48.719064  601531 start.go:495] detecting cgroup driver to use...
	I0127 14:07:48.719143  601531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:07:48.742423  601531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:07:48.761918  601531 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:07:48.761979  601531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:07:48.777294  601531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:07:48.792034  601531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:07:48.954341  601531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:07:49.087497  601531 docker.go:233] disabling docker service ...
	I0127 14:07:49.087581  601531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:07:49.105330  601531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:07:49.119089  601531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:07:49.297287  601531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:07:48.400546  601809 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 14:07:48.400763  601809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:07:48.400808  601809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:07:48.421986  601809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0127 14:07:48.422419  601809 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:07:48.422927  601809 main.go:141] libmachine: Using API Version  1
	I0127 14:07:48.422948  601809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:07:48.423288  601809 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:07:48.423451  601809 main.go:141] libmachine: (no-preload-183205) Calling .GetMachineName
	I0127 14:07:48.423545  601809 main.go:141] libmachine: (no-preload-183205) Calling .DriverName
	I0127 14:07:48.423643  601809 start.go:159] libmachine.API.Create for "no-preload-183205" (driver="kvm2")
	I0127 14:07:48.423674  601809 client.go:168] LocalClient.Create starting
	I0127 14:07:48.423709  601809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem
	I0127 14:07:48.423749  601809 main.go:141] libmachine: Decoding PEM data...
	I0127 14:07:48.423771  601809 main.go:141] libmachine: Parsing certificate...
	I0127 14:07:48.423839  601809 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem
	I0127 14:07:48.423867  601809 main.go:141] libmachine: Decoding PEM data...
	I0127 14:07:48.423884  601809 main.go:141] libmachine: Parsing certificate...
	I0127 14:07:48.423920  601809 main.go:141] libmachine: Running pre-create checks...
	I0127 14:07:48.423933  601809 main.go:141] libmachine: (no-preload-183205) Calling .PreCreateCheck
	I0127 14:07:48.424225  601809 main.go:141] libmachine: (no-preload-183205) Calling .GetConfigRaw
	I0127 14:07:48.424620  601809 main.go:141] libmachine: Creating machine...
	I0127 14:07:48.424637  601809 main.go:141] libmachine: (no-preload-183205) Calling .Create
	I0127 14:07:48.424748  601809 main.go:141] libmachine: (no-preload-183205) creating KVM machine...
	I0127 14:07:48.424764  601809 main.go:141] libmachine: (no-preload-183205) creating network...
	I0127 14:07:48.425981  601809 main.go:141] libmachine: (no-preload-183205) DBG | found existing default KVM network
	I0127 14:07:48.427278  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:48.427123  601857 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1d:6c:da} reservation:<nil>}
	I0127 14:07:48.428512  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:48.428430  601857 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000320a40}
	I0127 14:07:48.428651  601809 main.go:141] libmachine: (no-preload-183205) DBG | created network xml: 
	I0127 14:07:48.428676  601809 main.go:141] libmachine: (no-preload-183205) DBG | <network>
	I0127 14:07:48.428687  601809 main.go:141] libmachine: (no-preload-183205) DBG |   <name>mk-no-preload-183205</name>
	I0127 14:07:48.428693  601809 main.go:141] libmachine: (no-preload-183205) DBG |   <dns enable='no'/>
	I0127 14:07:48.428702  601809 main.go:141] libmachine: (no-preload-183205) DBG |   
	I0127 14:07:48.428710  601809 main.go:141] libmachine: (no-preload-183205) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0127 14:07:48.428719  601809 main.go:141] libmachine: (no-preload-183205) DBG |     <dhcp>
	I0127 14:07:48.428727  601809 main.go:141] libmachine: (no-preload-183205) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0127 14:07:48.428748  601809 main.go:141] libmachine: (no-preload-183205) DBG |     </dhcp>
	I0127 14:07:48.428755  601809 main.go:141] libmachine: (no-preload-183205) DBG |   </ip>
	I0127 14:07:48.428761  601809 main.go:141] libmachine: (no-preload-183205) DBG |   
	I0127 14:07:48.428767  601809 main.go:141] libmachine: (no-preload-183205) DBG | </network>
	I0127 14:07:48.428775  601809 main.go:141] libmachine: (no-preload-183205) DBG | 
	I0127 14:07:48.438595  601809 main.go:141] libmachine: (no-preload-183205) DBG | trying to create private KVM network mk-no-preload-183205 192.168.50.0/24...
	I0127 14:07:48.520071  601809 main.go:141] libmachine: (no-preload-183205) DBG | private KVM network mk-no-preload-183205 192.168.50.0/24 created
	I0127 14:07:48.520117  601809 main.go:141] libmachine: (no-preload-183205) setting up store path in /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205 ...
	I0127 14:07:48.520138  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:48.520044  601857 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:07:48.520161  601809 main.go:141] libmachine: (no-preload-183205) building disk image from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:07:48.520188  601809 main.go:141] libmachine: (no-preload-183205) Downloading /home/jenkins/minikube-integration/20327-555419/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 14:07:48.883721  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:48.883591  601857 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205/id_rsa...
	I0127 14:07:49.332540  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:49.332391  601857 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205/no-preload-183205.rawdisk...
	I0127 14:07:49.332585  601809 main.go:141] libmachine: (no-preload-183205) DBG | Writing magic tar header
	I0127 14:07:49.332603  601809 main.go:141] libmachine: (no-preload-183205) DBG | Writing SSH key tar header
	I0127 14:07:49.332616  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:49.332499  601857 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205 ...
	I0127 14:07:49.332632  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205
	I0127 14:07:49.332643  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines
	I0127 14:07:49.332660  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205 (perms=drwx------)
	I0127 14:07:49.332678  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines (perms=drwxr-xr-x)
	I0127 14:07:49.332714  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:07:49.332727  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube (perms=drwxr-xr-x)
	I0127 14:07:49.332739  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins/minikube-integration/20327-555419 (perms=drwxrwxr-x)
	I0127 14:07:49.332749  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419
	I0127 14:07:49.332763  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 14:07:49.332774  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 14:07:49.332787  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home/jenkins
	I0127 14:07:49.332798  601809 main.go:141] libmachine: (no-preload-183205) DBG | checking permissions on dir: /home
	I0127 14:07:49.332806  601809 main.go:141] libmachine: (no-preload-183205) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 14:07:49.332817  601809 main.go:141] libmachine: (no-preload-183205) creating domain...
	I0127 14:07:49.332830  601809 main.go:141] libmachine: (no-preload-183205) DBG | skipping /home - not owner
	I0127 14:07:49.334085  601809 main.go:141] libmachine: (no-preload-183205) define libvirt domain using xml: 
	I0127 14:07:49.334116  601809 main.go:141] libmachine: (no-preload-183205) <domain type='kvm'>
	I0127 14:07:49.334154  601809 main.go:141] libmachine: (no-preload-183205)   <name>no-preload-183205</name>
	I0127 14:07:49.334179  601809 main.go:141] libmachine: (no-preload-183205)   <memory unit='MiB'>2200</memory>
	I0127 14:07:49.334193  601809 main.go:141] libmachine: (no-preload-183205)   <vcpu>2</vcpu>
	I0127 14:07:49.334203  601809 main.go:141] libmachine: (no-preload-183205)   <features>
	I0127 14:07:49.334212  601809 main.go:141] libmachine: (no-preload-183205)     <acpi/>
	I0127 14:07:49.334223  601809 main.go:141] libmachine: (no-preload-183205)     <apic/>
	I0127 14:07:49.334231  601809 main.go:141] libmachine: (no-preload-183205)     <pae/>
	I0127 14:07:49.334241  601809 main.go:141] libmachine: (no-preload-183205)     
	I0127 14:07:49.334266  601809 main.go:141] libmachine: (no-preload-183205)   </features>
	I0127 14:07:49.334283  601809 main.go:141] libmachine: (no-preload-183205)   <cpu mode='host-passthrough'>
	I0127 14:07:49.334292  601809 main.go:141] libmachine: (no-preload-183205)   
	I0127 14:07:49.334299  601809 main.go:141] libmachine: (no-preload-183205)   </cpu>
	I0127 14:07:49.334307  601809 main.go:141] libmachine: (no-preload-183205)   <os>
	I0127 14:07:49.334318  601809 main.go:141] libmachine: (no-preload-183205)     <type>hvm</type>
	I0127 14:07:49.334327  601809 main.go:141] libmachine: (no-preload-183205)     <boot dev='cdrom'/>
	I0127 14:07:49.334336  601809 main.go:141] libmachine: (no-preload-183205)     <boot dev='hd'/>
	I0127 14:07:49.334345  601809 main.go:141] libmachine: (no-preload-183205)     <bootmenu enable='no'/>
	I0127 14:07:49.334354  601809 main.go:141] libmachine: (no-preload-183205)   </os>
	I0127 14:07:49.334383  601809 main.go:141] libmachine: (no-preload-183205)   <devices>
	I0127 14:07:49.334399  601809 main.go:141] libmachine: (no-preload-183205)     <disk type='file' device='cdrom'>
	I0127 14:07:49.334417  601809 main.go:141] libmachine: (no-preload-183205)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205/boot2docker.iso'/>
	I0127 14:07:49.334428  601809 main.go:141] libmachine: (no-preload-183205)       <target dev='hdc' bus='scsi'/>
	I0127 14:07:49.334436  601809 main.go:141] libmachine: (no-preload-183205)       <readonly/>
	I0127 14:07:49.334444  601809 main.go:141] libmachine: (no-preload-183205)     </disk>
	I0127 14:07:49.334460  601809 main.go:141] libmachine: (no-preload-183205)     <disk type='file' device='disk'>
	I0127 14:07:49.334478  601809 main.go:141] libmachine: (no-preload-183205)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 14:07:49.334495  601809 main.go:141] libmachine: (no-preload-183205)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/no-preload-183205/no-preload-183205.rawdisk'/>
	I0127 14:07:49.334506  601809 main.go:141] libmachine: (no-preload-183205)       <target dev='hda' bus='virtio'/>
	I0127 14:07:49.334517  601809 main.go:141] libmachine: (no-preload-183205)     </disk>
	I0127 14:07:49.334527  601809 main.go:141] libmachine: (no-preload-183205)     <interface type='network'>
	I0127 14:07:49.334537  601809 main.go:141] libmachine: (no-preload-183205)       <source network='mk-no-preload-183205'/>
	I0127 14:07:49.334552  601809 main.go:141] libmachine: (no-preload-183205)       <model type='virtio'/>
	I0127 14:07:49.334561  601809 main.go:141] libmachine: (no-preload-183205)     </interface>
	I0127 14:07:49.334572  601809 main.go:141] libmachine: (no-preload-183205)     <interface type='network'>
	I0127 14:07:49.334581  601809 main.go:141] libmachine: (no-preload-183205)       <source network='default'/>
	I0127 14:07:49.334591  601809 main.go:141] libmachine: (no-preload-183205)       <model type='virtio'/>
	I0127 14:07:49.334599  601809 main.go:141] libmachine: (no-preload-183205)     </interface>
	I0127 14:07:49.334609  601809 main.go:141] libmachine: (no-preload-183205)     <serial type='pty'>
	I0127 14:07:49.334618  601809 main.go:141] libmachine: (no-preload-183205)       <target port='0'/>
	I0127 14:07:49.334628  601809 main.go:141] libmachine: (no-preload-183205)     </serial>
	I0127 14:07:49.334637  601809 main.go:141] libmachine: (no-preload-183205)     <console type='pty'>
	I0127 14:07:49.334644  601809 main.go:141] libmachine: (no-preload-183205)       <target type='serial' port='0'/>
	I0127 14:07:49.334652  601809 main.go:141] libmachine: (no-preload-183205)     </console>
	I0127 14:07:49.334662  601809 main.go:141] libmachine: (no-preload-183205)     <rng model='virtio'>
	I0127 14:07:49.334672  601809 main.go:141] libmachine: (no-preload-183205)       <backend model='random'>/dev/random</backend>
	I0127 14:07:49.334681  601809 main.go:141] libmachine: (no-preload-183205)     </rng>
	I0127 14:07:49.334689  601809 main.go:141] libmachine: (no-preload-183205)     
	I0127 14:07:49.334702  601809 main.go:141] libmachine: (no-preload-183205)     
	I0127 14:07:49.334714  601809 main.go:141] libmachine: (no-preload-183205)   </devices>
	I0127 14:07:49.334721  601809 main.go:141] libmachine: (no-preload-183205) </domain>
	I0127 14:07:49.334733  601809 main.go:141] libmachine: (no-preload-183205) 
	I0127 14:07:49.339436  601809 main.go:141] libmachine: (no-preload-183205) DBG | domain no-preload-183205 has defined MAC address 52:54:00:55:22:13 in network default
	I0127 14:07:49.340165  601809 main.go:141] libmachine: (no-preload-183205) starting domain...
	I0127 14:07:49.340191  601809 main.go:141] libmachine: (no-preload-183205) DBG | domain no-preload-183205 has defined MAC address 52:54:00:20:60:92 in network mk-no-preload-183205
	I0127 14:07:49.340200  601809 main.go:141] libmachine: (no-preload-183205) ensuring networks are active...
	I0127 14:07:49.340971  601809 main.go:141] libmachine: (no-preload-183205) Ensuring network default is active
	I0127 14:07:49.341319  601809 main.go:141] libmachine: (no-preload-183205) Ensuring network mk-no-preload-183205 is active
	I0127 14:07:49.341986  601809 main.go:141] libmachine: (no-preload-183205) getting domain XML...
	I0127 14:07:49.342845  601809 main.go:141] libmachine: (no-preload-183205) creating domain...
	I0127 14:07:49.754647  601809 main.go:141] libmachine: (no-preload-183205) waiting for IP...
	I0127 14:07:49.755802  601809 main.go:141] libmachine: (no-preload-183205) DBG | domain no-preload-183205 has defined MAC address 52:54:00:20:60:92 in network mk-no-preload-183205
	I0127 14:07:49.756435  601809 main.go:141] libmachine: (no-preload-183205) DBG | unable to find current IP address of domain no-preload-183205 in network mk-no-preload-183205
	I0127 14:07:49.756518  601809 main.go:141] libmachine: (no-preload-183205) DBG | I0127 14:07:49.756441  601857 retry.go:31] will retry after 272.533474ms: waiting for domain to come up
	I0127 14:07:49.633836  601531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:07:49.794413  601531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:07:49.967991  601531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:07:49.968073  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.084660  601531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:07:50.084743  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.131683  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.199059  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.266955  601531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:07:50.289803  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.308029  601531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.346036  601531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:07:50.378069  601531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:07:50.396951  601531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:07:50.417736  601531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:50.685609  601531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:07:51.151301  601531 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:07:51.151405  601531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:07:51.156366  601531 start.go:563] Will wait 60s for crictl version
	I0127 14:07:51.156427  601531 ssh_runner.go:195] Run: which crictl
	I0127 14:07:51.160621  601531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:07:51.202254  601531 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:07:51.202354  601531 ssh_runner.go:195] Run: crio --version
	I0127 14:07:51.232941  601531 ssh_runner.go:195] Run: crio --version
	I0127 14:07:51.309473  601531 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:07:48.482045  601373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:07:51.310590  601531 main.go:141] libmachine: (pause-966446) Calling .GetIP
	I0127 14:07:51.314164  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:51.314691  601531 main.go:141] libmachine: (pause-966446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:2c:b1", ip: ""} in network mk-pause-966446: {Iface:virbr3 ExpiryTime:2025-01-27 15:06:10 +0000 UTC Type:0 Mac:52:54:00:5b:2c:b1 Iaid: IPaddr:192.168.61.72 Prefix:24 Hostname:pause-966446 Clientid:01:52:54:00:5b:2c:b1}
	I0127 14:07:51.314723  601531 main.go:141] libmachine: (pause-966446) DBG | domain pause-966446 has defined IP address 192.168.61.72 and MAC address 52:54:00:5b:2c:b1 in network mk-pause-966446
	I0127 14:07:51.315015  601531 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 14:07:51.342772  601531 kubeadm.go:883] updating cluster {Name:pause-966446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-966446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:07:51.342916  601531 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:07:51.342980  601531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:51.540139  601531 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:07:51.540176  601531 crio.go:433] Images already preloaded, skipping extraction
	I0127 14:07:51.540245  601531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:07:51.723140  601531 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:07:51.723176  601531 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:07:51.723208  601531 kubeadm.go:934] updating node { 192.168.61.72 8443 v1.32.1 crio true true} ...
	I0127 14:07:51.723370  601531 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-966446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-966446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:07:51.723451  601531 ssh_runner.go:195] Run: crio config
	I0127 14:07:51.833408  601531 cni.go:84] Creating CNI manager for ""
	I0127 14:07:51.833430  601531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:07:51.833440  601531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:07:51.833472  601531 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.72 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-966446 NodeName:pause-966446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:07:51.833663  601531 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-966446"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.72"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.72"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:07:51.833759  601531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:07:51.849643  601531 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:07:51.849734  601531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:07:51.859622  601531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0127 14:07:51.878933  601531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:07:51.927875  601531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0127 14:07:51.946086  601531 ssh_runner.go:195] Run: grep 192.168.61.72	control-plane.minikube.internal$ /etc/hosts
	I0127 14:07:51.954227  601531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:07:52.111702  601531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:07:52.136062  601531 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446 for IP: 192.168.61.72
	I0127 14:07:52.136089  601531 certs.go:194] generating shared ca certs ...
	I0127 14:07:52.136111  601531 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:07:52.136278  601531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:07:52.136342  601531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:07:52.136354  601531 certs.go:256] generating profile certs ...
	I0127 14:07:52.136983  601531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/client.key
	I0127 14:07:52.137115  601531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/apiserver.key.f1093c80
	I0127 14:07:52.137177  601531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/proxy-client.key
	I0127 14:07:52.137354  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:07:52.137393  601531 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:07:52.137408  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:07:52.137445  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:07:52.137487  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:07:52.137518  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:07:52.137570  601531 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:07:52.139063  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:07:52.163001  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:07:52.186084  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:07:52.208609  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:07:52.231068  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 14:07:52.255538  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:07:52.279172  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:07:52.304122  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/pause-966446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:07:52.327911  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:07:52.350495  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:07:52.374464  601531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:07:52.413824  601531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:07:52.429570  601531 ssh_runner.go:195] Run: openssl version
	I0127 14:07:52.435317  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:07:52.446068  601531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:07:52.450652  601531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:07:52.450700  601531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:07:52.456430  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:07:52.466347  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:07:52.478172  601531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:52.483080  601531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:52.483133  601531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:07:52.488944  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:07:52.498827  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:07:52.510270  601531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:07:52.514978  601531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:07:52.515019  601531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:07:52.520460  601531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:07:52.529770  601531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:07:52.534208  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:07:52.539664  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:07:52.545155  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:07:52.550674  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:07:52.556058  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:07:52.561391  601531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 14:07:52.566877  601531 kubeadm.go:392] StartCluster: {Name:pause-966446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-966446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.72 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:07:52.566970  601531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:07:52.567004  601531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:07:52.602509  601531 cri.go:89] found id: "9a4a6873a790179033815b842a490593ca7e247ab4c35927ab123d40b5b1c1b0"
	I0127 14:07:52.602533  601531 cri.go:89] found id: "153475c34d724a00aae02973ec25d6ba069b6798d663e0fb03fdcb678fbf90dc"
	I0127 14:07:52.602539  601531 cri.go:89] found id: "538bc3dc9efa53fa541ba54500003bc5a9f4ecc98ce84f4299f09c6519df409f"
	I0127 14:07:52.602544  601531 cri.go:89] found id: "ddaac33d82a8a7fca412c3f5cce780ba01829a09277d596b2eb83c688aa40627"
	I0127 14:07:52.602548  601531 cri.go:89] found id: "2fff1ca9ed0fb4d432dbddcbfba74d463e908d8e323e8f7da8389d0e159e27eb"
	I0127 14:07:52.602552  601531 cri.go:89] found id: "67099ee481deaf66bccd062bf3bbfde8a62b7a39d5819e92b57acf9ddbb3d637"
	I0127 14:07:52.602556  601531 cri.go:89] found id: ""
	I0127 14:07:52.602593  601531 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-966446 -n pause-966446
helpers_test.go:261: (dbg) Run:  kubectl --context pause-966446 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (63.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (1645.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-742142 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-742142 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (27m23.432765817s)

                                                
                                                
-- stdout --
	* [embed-certs-742142] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-742142" primary control-plane node in "embed-certs-742142" cluster
	* Restarting existing kvm2 VM for "embed-certs-742142" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-742142 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:11:00.886427  603695 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:11:00.886548  603695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:11:00.886559  603695 out.go:358] Setting ErrFile to fd 2...
	I0127 14:11:00.886564  603695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:11:00.886733  603695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:11:00.887260  603695 out.go:352] Setting JSON to false
	I0127 14:11:00.888183  603695 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17606,"bootTime":1737969455,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:11:00.888284  603695 start.go:139] virtualization: kvm guest
	I0127 14:11:00.890140  603695 out.go:177] * [embed-certs-742142] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:11:00.891373  603695 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:11:00.891391  603695 notify.go:220] Checking for updates...
	I0127 14:11:00.893642  603695 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:11:00.894882  603695 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:11:00.896004  603695 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:11:00.897157  603695 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:11:00.898333  603695 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:11:00.899824  603695 config.go:182] Loaded profile config "embed-certs-742142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:11:00.900220  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:11:00.900289  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:11:00.915000  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I0127 14:11:00.915443  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:11:00.915912  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:11:00.915942  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:11:00.916308  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:11:00.916482  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:11:00.916729  603695 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:11:00.916986  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:11:00.917019  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:11:00.931046  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44945
	I0127 14:11:00.931383  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:11:00.931788  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:11:00.931806  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:11:00.932128  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:11:00.932331  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:11:00.966512  603695 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:11:00.967525  603695 start.go:297] selected driver: kvm2
	I0127 14:11:00.967536  603695 start.go:901] validating driver "kvm2" against &{Name:embed-certs-742142 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-742142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.87 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:11:00.967623  603695 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:11:00.968254  603695 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:11:00.968322  603695 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:11:00.982117  603695 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:11:00.982510  603695 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:11:00.982552  603695 cni.go:84] Creating CNI manager for ""
	I0127 14:11:00.982609  603695 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:11:00.982653  603695 start.go:340] cluster config:
	{Name:embed-certs-742142 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-742142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.87 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:11:00.982784  603695 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:11:00.984304  603695 out.go:177] * Starting "embed-certs-742142" primary control-plane node in "embed-certs-742142" cluster
	I0127 14:11:00.985391  603695 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:11:00.985429  603695 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:11:00.985437  603695 cache.go:56] Caching tarball of preloaded images
	I0127 14:11:00.985512  603695 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:11:00.985522  603695 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:11:00.985658  603695 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/embed-certs-742142/config.json ...
	I0127 14:11:00.985834  603695 start.go:360] acquireMachinesLock for embed-certs-742142: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:11:00.985876  603695 start.go:364] duration metric: took 25.429µs to acquireMachinesLock for "embed-certs-742142"
	I0127 14:11:00.985889  603695 start.go:96] Skipping create...Using existing machine configuration
	I0127 14:11:00.985894  603695 fix.go:54] fixHost starting: 
	I0127 14:11:00.986174  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:11:00.986208  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:11:00.999588  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44189
	I0127 14:11:00.999961  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:11:01.000398  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:11:01.000418  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:11:01.000713  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:11:01.000891  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:11:01.001038  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetState
	I0127 14:11:01.002597  603695 fix.go:112] recreateIfNeeded on embed-certs-742142: state=Stopped err=<nil>
	I0127 14:11:01.002617  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	W0127 14:11:01.002770  603695 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 14:11:01.004383  603695 out.go:177] * Restarting existing kvm2 VM for "embed-certs-742142" ...
	I0127 14:11:01.005457  603695 main.go:141] libmachine: (embed-certs-742142) Calling .Start
	I0127 14:11:01.005649  603695 main.go:141] libmachine: (embed-certs-742142) starting domain...
	I0127 14:11:01.005669  603695 main.go:141] libmachine: (embed-certs-742142) ensuring networks are active...
	I0127 14:11:01.006306  603695 main.go:141] libmachine: (embed-certs-742142) Ensuring network default is active
	I0127 14:11:01.006725  603695 main.go:141] libmachine: (embed-certs-742142) Ensuring network mk-embed-certs-742142 is active
	I0127 14:11:01.007131  603695 main.go:141] libmachine: (embed-certs-742142) getting domain XML...
	I0127 14:11:01.007892  603695 main.go:141] libmachine: (embed-certs-742142) creating domain...
	I0127 14:11:01.355426  603695 main.go:141] libmachine: (embed-certs-742142) waiting for IP...
	I0127 14:11:01.356412  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:01.356875  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:01.356956  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:01.356871  603730 retry.go:31] will retry after 203.172248ms: waiting for domain to come up
	I0127 14:11:01.561198  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:01.561906  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:01.561943  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:01.561884  603730 retry.go:31] will retry after 315.993074ms: waiting for domain to come up
	I0127 14:11:01.879450  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:01.879965  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:01.879993  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:01.879911  603730 retry.go:31] will retry after 339.747764ms: waiting for domain to come up
	I0127 14:11:02.221449  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:02.221952  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:02.221977  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:02.221943  603730 retry.go:31] will retry after 453.833682ms: waiting for domain to come up
	I0127 14:11:02.677824  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:02.678469  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:02.678501  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:02.678434  603730 retry.go:31] will retry after 697.486157ms: waiting for domain to come up
	I0127 14:11:03.377426  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:03.377921  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:03.377966  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:03.377884  603730 retry.go:31] will retry after 720.538334ms: waiting for domain to come up
	I0127 14:11:04.099523  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:04.099932  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:04.099961  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:04.099912  603730 retry.go:31] will retry after 1.097757479s: waiting for domain to come up
	I0127 14:11:05.199086  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:05.199563  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:05.199594  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:05.199524  603730 retry.go:31] will retry after 1.083737974s: waiting for domain to come up
	I0127 14:11:06.284483  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:06.285021  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:06.285051  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:06.284990  603730 retry.go:31] will retry after 1.795044902s: waiting for domain to come up
	I0127 14:11:08.464015  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:08.464579  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:08.464633  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:08.464567  603730 retry.go:31] will retry after 1.789388812s: waiting for domain to come up
	I0127 14:11:10.256732  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:10.257330  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:10.257362  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:10.257273  603730 retry.go:31] will retry after 2.532730955s: waiting for domain to come up
	I0127 14:11:12.792075  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:12.792596  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:12.792627  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:12.792550  603730 retry.go:31] will retry after 3.057790303s: waiting for domain to come up
	I0127 14:11:15.853692  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:15.854099  603695 main.go:141] libmachine: (embed-certs-742142) DBG | unable to find current IP address of domain embed-certs-742142 in network mk-embed-certs-742142
	I0127 14:11:15.854129  603695 main.go:141] libmachine: (embed-certs-742142) DBG | I0127 14:11:15.854043  603730 retry.go:31] will retry after 3.00102881s: waiting for domain to come up
	I0127 14:11:18.858441  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:18.859975  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has current primary IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:18.859997  603695 main.go:141] libmachine: (embed-certs-742142) found domain IP: 192.168.61.87
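	(The "will retry after ..." lines above come from minikube's generic backoff helper while it waits for the restarted VM to obtain a DHCP lease. A minimal sketch of that pattern is below; the function name, growth factor and timeout are assumptions for illustration, not minikube's actual retry.go.)

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check() with a growing delay until it succeeds or the timeout elapses.
func waitFor(check func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // rough growth, similar in spirit to the intervals in the log
	}
	return errors.New("timed out waiting for domain IP")
}

func main() {
	attempts := 0
	_ = waitFor(func() (bool, error) {
		attempts++
		return attempts >= 5, nil // pretend the DHCP lease shows up on the fifth poll
	}, 30*time.Second)
}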
	I0127 14:11:18.860010  603695 main.go:141] libmachine: (embed-certs-742142) reserving static IP address...
	I0127 14:11:18.860431  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "embed-certs-742142", mac: "52:54:00:44:84:6b", ip: "192.168.61.87"} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:18.860456  603695 main.go:141] libmachine: (embed-certs-742142) reserved static IP address 192.168.61.87 for domain embed-certs-742142
	I0127 14:11:18.860468  603695 main.go:141] libmachine: (embed-certs-742142) DBG | skip adding static IP to network mk-embed-certs-742142 - found existing host DHCP lease matching {name: "embed-certs-742142", mac: "52:54:00:44:84:6b", ip: "192.168.61.87"}
	I0127 14:11:18.860481  603695 main.go:141] libmachine: (embed-certs-742142) DBG | Getting to WaitForSSH function...
	I0127 14:11:18.860493  603695 main.go:141] libmachine: (embed-certs-742142) waiting for SSH...
	I0127 14:11:18.862642  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:18.862882  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:18.862910  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:18.862990  603695 main.go:141] libmachine: (embed-certs-742142) DBG | Using SSH client type: external
	I0127 14:11:18.863026  603695 main.go:141] libmachine: (embed-certs-742142) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/embed-certs-742142/id_rsa (-rw-------)
	I0127 14:11:18.863061  603695 main.go:141] libmachine: (embed-certs-742142) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/embed-certs-742142/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:11:18.863074  603695 main.go:141] libmachine: (embed-certs-742142) DBG | About to run SSH command:
	I0127 14:11:18.863088  603695 main.go:141] libmachine: (embed-certs-742142) DBG | exit 0
	I0127 14:11:18.980821  603695 main.go:141] libmachine: (embed-certs-742142) DBG | SSH cmd err, output: <nil>: 
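	(The external SSH probe above simply runs `exit 0` with the listed options and treats a zero exit status as "the guest is reachable". A minimal sketch of that check follows; the key path and address are copied from this run, while the helper name and polling interval are assumptions.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest over ssh and reports whether it succeeded.
func sshReady(addr, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/20327-555419/.minikube/machines/embed-certs-742142/id_rsa"
	for !sshReady("192.168.61.87", key) {
		time.Sleep(2 * time.Second) // polling interval is an assumption
	}
	fmt.Println("SSH is up")
}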
	I0127 14:11:18.981153  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetConfigRaw
	I0127 14:11:18.981824  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetIP
	I0127 14:11:18.984101  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:18.984411  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:18.984440  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:18.984648  603695 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/embed-certs-742142/config.json ...
	I0127 14:11:18.984831  603695 machine.go:93] provisionDockerMachine start ...
	I0127 14:11:18.984850  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:11:18.985032  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:11:18.987339  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:18.987661  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:18.987689  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:18.987817  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:11:18.987990  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:18.988117  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:18.988260  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:11:18.988384  603695 main.go:141] libmachine: Using SSH client type: native
	I0127 14:11:18.988569  603695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.87 22 <nil> <nil>}
	I0127 14:11:18.988580  603695 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:11:19.085401  603695 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 14:11:19.085436  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetMachineName
	I0127 14:11:19.085656  603695 buildroot.go:166] provisioning hostname "embed-certs-742142"
	I0127 14:11:19.085680  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetMachineName
	I0127 14:11:19.085856  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:11:19.088159  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.088532  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:19.088558  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.088670  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:11:19.088831  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:19.088955  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:19.089109  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:11:19.089267  603695 main.go:141] libmachine: Using SSH client type: native
	I0127 14:11:19.089423  603695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.87 22 <nil> <nil>}
	I0127 14:11:19.089437  603695 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-742142 && echo "embed-certs-742142" | sudo tee /etc/hostname
	I0127 14:11:19.201561  603695 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-742142
	
	I0127 14:11:19.201629  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:11:19.204454  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.204820  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:19.204863  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.205055  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:11:19.205241  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:19.205406  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:19.205504  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:11:19.205648  603695 main.go:141] libmachine: Using SSH client type: native
	I0127 14:11:19.205868  603695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.87 22 <nil> <nil>}
	I0127 14:11:19.205894  603695 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-742142' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-742142/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-742142' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:11:19.313661  603695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:11:19.313684  603695 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:11:19.313700  603695 buildroot.go:174] setting up certificates
	I0127 14:11:19.313710  603695 provision.go:84] configureAuth start
	I0127 14:11:19.313731  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetMachineName
	I0127 14:11:19.313940  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetIP
	I0127 14:11:19.316379  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.316719  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:19.316756  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.316903  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:11:19.319018  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.319328  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:19.319370  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.319552  603695 provision.go:143] copyHostCerts
	I0127 14:11:19.319612  603695 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:11:19.319623  603695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:11:19.319689  603695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:11:19.319823  603695 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:11:19.319836  603695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:11:19.319867  603695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:11:19.319930  603695 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:11:19.319937  603695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:11:19.319959  603695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:11:19.320007  603695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.embed-certs-742142 san=[127.0.0.1 192.168.61.87 embed-certs-742142 localhost minikube]
	I0127 14:11:19.431931  603695 provision.go:177] copyRemoteCerts
	I0127 14:11:19.431975  603695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:11:19.431991  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:11:19.434002  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.434287  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:19.434313  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.434441  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:11:19.434604  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:19.434740  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:11:19.434888  603695 sshutil.go:53] new ssh client: &{IP:192.168.61.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/embed-certs-742142/id_rsa Username:docker}
	I0127 14:11:19.510515  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:11:19.533679  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 14:11:19.556663  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 14:11:19.580110  603695 provision.go:87] duration metric: took 266.388163ms to configureAuth
	I0127 14:11:19.580140  603695 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:11:19.580316  603695 config.go:182] Loaded profile config "embed-certs-742142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:11:19.580438  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:11:19.582944  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.583267  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:19.583291  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.583533  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:11:19.583705  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:19.583840  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:19.583965  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:11:19.584101  603695 main.go:141] libmachine: Using SSH client type: native
	I0127 14:11:19.584254  603695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.87 22 <nil> <nil>}
	I0127 14:11:19.584268  603695 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:11:19.795442  603695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:11:19.795489  603695 machine.go:96] duration metric: took 810.625388ms to provisionDockerMachine
	I0127 14:11:19.795524  603695 start.go:293] postStartSetup for "embed-certs-742142" (driver="kvm2")
	I0127 14:11:19.795539  603695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:11:19.795563  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:11:19.795915  603695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:11:19.795972  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:11:19.798481  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.798804  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:19.798825  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.798996  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:11:19.799191  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:19.799353  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:11:19.799472  603695 sshutil.go:53] new ssh client: &{IP:192.168.61.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/embed-certs-742142/id_rsa Username:docker}
	I0127 14:11:19.875712  603695 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:11:19.879925  603695 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:11:19.879950  603695 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:11:19.880009  603695 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:11:19.880123  603695 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:11:19.880252  603695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:11:19.889758  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:11:19.916449  603695 start.go:296] duration metric: took 120.905914ms for postStartSetup
	I0127 14:11:19.916514  603695 fix.go:56] duration metric: took 18.930604236s for fixHost
	I0127 14:11:19.916546  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:11:19.919318  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.919715  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:19.919748  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:19.919925  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:11:19.920126  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:19.920318  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:19.920433  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:11:19.920595  603695 main.go:141] libmachine: Using SSH client type: native
	I0127 14:11:19.920816  603695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.87 22 <nil> <nil>}
	I0127 14:11:19.920833  603695 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:11:20.021525  603695 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987079.997333797
	
	I0127 14:11:20.021544  603695 fix.go:216] guest clock: 1737987079.997333797
	I0127 14:11:20.021550  603695 fix.go:229] Guest: 2025-01-27 14:11:19.997333797 +0000 UTC Remote: 2025-01-27 14:11:19.916524398 +0000 UTC m=+19.066896930 (delta=80.809399ms)
	I0127 14:11:20.021599  603695 fix.go:200] guest clock delta is within tolerance: 80.809399ms
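	(The two timestamps above feed a simple skew check: parse the guest's `date +%s.%N` output, subtract the host-side reading, and only adjust the clock if the drift exceeds a tolerance. A minimal sketch under those assumptions; the tolerance value here is illustrative, not minikube's constant.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns guest minus host.
func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Timestamps taken from the run above (guest reading vs. host-side remote time).
	delta, err := guestClockDelta("1737987079.997333797", time.Unix(1737987079, 916524398))
	if err != nil {
		panic(err)
	}
	tolerance := time.Second // illustrative tolerance, not minikube's actual constant
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("delta %v exceeds %v, clock would be adjusted\n", delta, tolerance)
	}
}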
	I0127 14:11:20.021611  603695 start.go:83] releasing machines lock for "embed-certs-742142", held for 19.035725735s
	I0127 14:11:20.021635  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:11:20.021856  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetIP
	I0127 14:11:20.024152  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:20.024531  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:20.024555  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:20.024666  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:11:20.025105  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:11:20.025281  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:11:20.025387  603695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:11:20.025429  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:11:20.025483  603695 ssh_runner.go:195] Run: cat /version.json
	I0127 14:11:20.025511  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:11:20.028226  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:20.028315  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:20.028614  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:20.028643  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:20.028666  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:20.028684  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:20.028722  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:11:20.028930  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:20.028956  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:11:20.029049  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:11:20.029121  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:11:20.029213  603695 sshutil.go:53] new ssh client: &{IP:192.168.61.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/embed-certs-742142/id_rsa Username:docker}
	I0127 14:11:20.029266  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:11:20.029370  603695 sshutil.go:53] new ssh client: &{IP:192.168.61.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/embed-certs-742142/id_rsa Username:docker}
	I0127 14:11:20.103687  603695 ssh_runner.go:195] Run: systemctl --version
	I0127 14:11:20.127559  603695 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:11:20.282794  603695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:11:20.289014  603695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:11:20.289086  603695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:11:20.305630  603695 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:11:20.305651  603695 start.go:495] detecting cgroup driver to use...
	I0127 14:11:20.305718  603695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:11:20.322029  603695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:11:20.335912  603695 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:11:20.335980  603695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:11:20.349081  603695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:11:20.362888  603695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:11:20.473094  603695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:11:20.638666  603695 docker.go:233] disabling docker service ...
	I0127 14:11:20.638746  603695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:11:20.654504  603695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:11:20.669742  603695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:11:20.798355  603695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:11:20.926805  603695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:11:20.943227  603695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:11:20.965421  603695 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:11:20.965491  603695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:11:20.976269  603695 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:11:20.976339  603695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:11:20.986928  603695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:11:20.997276  603695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:11:21.008473  603695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:11:21.019657  603695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:11:21.030196  603695 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:11:21.047468  603695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:11:21.057976  603695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:11:21.067237  603695 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:11:21.067288  603695 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:11:21.079791  603695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:11:21.089344  603695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:11:21.216192  603695 ssh_runner.go:195] Run: sudo systemctl restart crio
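	(The sequence of `sed` edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. The sketch below only assembles and prints command strings matching the ones in the log; the helper function is hypothetical and nothing is executed on a VM.)

package main

import "fmt"

// crioConfigCommands returns the shell edits that point CRI-O at the desired
// pause image and cgroup driver, as run against /etc/crio/crio.conf.d/02-crio.conf above.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload && sudo systemctl restart crio",
	}
}

func main() {
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.10", "cgroupfs") {
		fmt.Println(cmd)
	}
}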
	I0127 14:11:21.325985  603695 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:11:21.326090  603695 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:11:21.331072  603695 start.go:563] Will wait 60s for crictl version
	I0127 14:11:21.331140  603695 ssh_runner.go:195] Run: which crictl
	I0127 14:11:21.334882  603695 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:11:21.375846  603695 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:11:21.375938  603695 ssh_runner.go:195] Run: crio --version
	I0127 14:11:21.403286  603695 ssh_runner.go:195] Run: crio --version
	I0127 14:11:21.433472  603695 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:11:21.434783  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetIP
	I0127 14:11:21.437695  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:21.438118  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:11:21.438146  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:11:21.438365  603695 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 14:11:21.442481  603695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:11:21.455193  603695 kubeadm.go:883] updating cluster {Name:embed-certs-742142 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-742142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.87 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:11:21.455344  603695 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:11:21.455410  603695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:11:21.494974  603695 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 14:11:21.495025  603695 ssh_runner.go:195] Run: which lz4
	I0127 14:11:21.499276  603695 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:11:21.503641  603695 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:11:21.503664  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 14:11:22.895550  603695 crio.go:462] duration metric: took 1.39629182s to copy over tarball
	I0127 14:11:22.895628  603695 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:11:25.042607  603695 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.146934254s)
	I0127 14:11:25.042656  603695 crio.go:469] duration metric: took 2.147072261s to extract the tarball
	I0127 14:11:25.042665  603695 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 14:11:25.080702  603695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:11:25.121801  603695 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:11:25.121827  603695 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:11:25.121837  603695 kubeadm.go:934] updating node { 192.168.61.87 8443 v1.32.1 crio true true} ...
	I0127 14:11:25.121945  603695 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-742142 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-742142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:11:25.122011  603695 ssh_runner.go:195] Run: crio config
	I0127 14:11:25.166697  603695 cni.go:84] Creating CNI manager for ""
	I0127 14:11:25.166723  603695 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:11:25.166736  603695 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:11:25.166775  603695 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.87 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-742142 NodeName:embed-certs-742142 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:11:25.166946  603695 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-742142"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.87"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.87"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:11:25.167025  603695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:11:25.176743  603695 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:11:25.176806  603695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:11:25.186140  603695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0127 14:11:25.202300  603695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:11:25.218782  603695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
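
The YAML block above is the kubeadm, kubelet and kube-proxy configuration that was just copied to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of how such a file can be rendered from a struct, here is a minimal text/template sketch covering only the InitConfiguration fragment (the struct fields and template text are assumptions for illustration, not minikube's actual generator):

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// initCfg holds just the handful of values needed for this fragment.
	type initCfg struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}
	
	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`
	
	func main() {
		cfg := initCfg{
			AdvertiseAddress: "192.168.61.87",
			BindPort:         8443,
			NodeName:         "embed-certs-742142",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		}
		// template.Must panics on a parse error, which is acceptable for a constant template.
		if err := template.Must(template.New("init").Parse(initTmpl)).Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}
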
	I0127 14:11:25.235310  603695 ssh_runner.go:195] Run: grep 192.168.61.87	control-plane.minikube.internal$ /etc/hosts
	I0127 14:11:25.239012  603695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:11:25.250518  603695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:11:25.382875  603695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:11:25.399961  603695 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/embed-certs-742142 for IP: 192.168.61.87
	I0127 14:11:25.399983  603695 certs.go:194] generating shared ca certs ...
	I0127 14:11:25.400012  603695 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:11:25.400221  603695 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:11:25.400276  603695 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:11:25.400291  603695 certs.go:256] generating profile certs ...
	I0127 14:11:25.400411  603695 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/embed-certs-742142/client.key
	I0127 14:11:25.400486  603695 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/embed-certs-742142/apiserver.key.9f7f63f3
	I0127 14:11:25.400546  603695 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/embed-certs-742142/proxy-client.key
	I0127 14:11:25.400702  603695 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:11:25.400741  603695 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:11:25.400756  603695 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:11:25.400791  603695 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:11:25.400824  603695 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:11:25.400854  603695 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:11:25.400908  603695 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:11:25.401837  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:11:25.434819  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:11:25.466932  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:11:25.517167  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:11:25.552539  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/embed-certs-742142/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 14:11:25.578476  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/embed-certs-742142/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:11:25.601993  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/embed-certs-742142/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:11:25.626167  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/embed-certs-742142/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:11:25.655342  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:11:25.678789  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:11:25.702278  603695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:11:25.725557  603695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:11:25.742169  603695 ssh_runner.go:195] Run: openssl version
	I0127 14:11:25.747819  603695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:11:25.758042  603695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:11:25.762497  603695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:11:25.762587  603695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:11:25.768195  603695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:11:25.778225  603695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:11:25.788320  603695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:11:25.792660  603695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:11:25.792704  603695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:11:25.798163  603695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:11:25.808202  603695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:11:25.818757  603695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:11:25.823067  603695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:11:25.823108  603695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:11:25.828460  603695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:11:25.838359  603695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:11:25.842736  603695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:11:25.848436  603695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:11:25.854264  603695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:11:25.859830  603695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:11:25.865345  603695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:11:25.870917  603695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
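
The "openssl x509 -checkend 86400" calls above verify that each control-plane certificate remains valid for at least another 24 hours. A minimal Go sketch of the equivalent check with crypto/x509 (the file path in main is illustrative):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM certificate at path expires within d,
	// i.e. the Go analogue of "openssl x509 -noout -checkend <seconds>".
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
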
	I0127 14:11:25.876414  603695 kubeadm.go:392] StartCluster: {Name:embed-certs-742142 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-742142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.87 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:11:25.876498  603695 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:11:25.876536  603695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:11:25.912229  603695 cri.go:89] found id: ""
	I0127 14:11:25.912293  603695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:11:25.921756  603695 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 14:11:25.921773  603695 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 14:11:25.921812  603695 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 14:11:25.930809  603695 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:11:25.931456  603695 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-742142" does not appear in /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:11:25.931722  603695 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-555419/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-742142" cluster setting kubeconfig missing "embed-certs-742142" context setting]
	I0127 14:11:25.932228  603695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:11:25.933548  603695 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 14:11:25.942601  603695 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.87
	I0127 14:11:25.942632  603695 kubeadm.go:1160] stopping kube-system containers ...
	I0127 14:11:25.942646  603695 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 14:11:25.942695  603695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:11:25.981268  603695 cri.go:89] found id: ""
	I0127 14:11:25.981354  603695 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 14:11:25.998216  603695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:11:26.007164  603695 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:11:26.007182  603695 kubeadm.go:157] found existing configuration files:
	
	I0127 14:11:26.007221  603695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:11:26.016274  603695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:11:26.016321  603695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:11:26.025330  603695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:11:26.034210  603695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:11:26.034257  603695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:11:26.043465  603695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:11:26.052023  603695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:11:26.052075  603695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:11:26.060730  603695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:11:26.068984  603695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:11:26.069024  603695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:11:26.077691  603695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:11:26.086460  603695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:11:26.204108  603695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:11:27.516233  603695 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.312079325s)
	I0127 14:11:27.516276  603695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:11:27.740421  603695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:11:27.843320  603695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
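
The restart path above re-runs individual "kubeadm init phase" commands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than a full init. A minimal sketch of driving that phase sequence with os/exec; in the real flow these commands run on the VM over SSH with a pinned PATH:

	package main
	
	import (
		"log"
		"os"
		"os/exec"
	)
	
	func main() {
		// The phase order mirrors the log above; the config path is the one
		// staged earlier at /var/tmp/minikube/kubeadm.yaml.
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "kubeconfig", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "kubelet-start", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "control-plane", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "etcd", "local", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		}
		for _, args := range phases {
			cmd := exec.Command("kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				log.Fatalf("kubeadm %v: %v", args, err)
			}
		}
	}
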
	I0127 14:11:27.953590  603695 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:11:27.953690  603695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:11:28.453788  603695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:11:28.954485  603695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:11:29.453843  603695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:11:29.953928  603695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:11:30.454372  603695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:11:30.475418  603695 api_server.go:72] duration metric: took 2.521857764s to wait for apiserver process to appear ...
	I0127 14:11:30.475451  603695 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:11:30.475476  603695 api_server.go:253] Checking apiserver healthz at https://192.168.61.87:8443/healthz ...
	I0127 14:11:32.969856  603695 api_server.go:279] https://192.168.61.87:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:11:32.969895  603695 api_server.go:103] status: https://192.168.61.87:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:11:32.969914  603695 api_server.go:253] Checking apiserver healthz at https://192.168.61.87:8443/healthz ...
	I0127 14:11:33.013365  603695 api_server.go:279] https://192.168.61.87:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:11:33.013398  603695 api_server.go:103] status: https://192.168.61.87:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:11:33.013415  603695 api_server.go:253] Checking apiserver healthz at https://192.168.61.87:8443/healthz ...
	I0127 14:11:33.072914  603695 api_server.go:279] https://192.168.61.87:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 14:11:33.072958  603695 api_server.go:103] status: https://192.168.61.87:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 14:11:33.476596  603695 api_server.go:253] Checking apiserver healthz at https://192.168.61.87:8443/healthz ...
	I0127 14:11:33.482925  603695 api_server.go:279] https://192.168.61.87:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:11:33.482972  603695 api_server.go:103] status: https://192.168.61.87:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:11:33.976192  603695 api_server.go:253] Checking apiserver healthz at https://192.168.61.87:8443/healthz ...
	I0127 14:11:33.993036  603695 api_server.go:279] https://192.168.61.87:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 14:11:33.993071  603695 api_server.go:103] status: https://192.168.61.87:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 14:11:34.475678  603695 api_server.go:253] Checking apiserver healthz at https://192.168.61.87:8443/healthz ...
	I0127 14:11:34.481321  603695 api_server.go:279] https://192.168.61.87:8443/healthz returned 200:
	ok
	I0127 14:11:34.489731  603695 api_server.go:141] control plane version: v1.32.1
	I0127 14:11:34.489769  603695 api_server.go:131] duration metric: took 4.014307548s to wait for apiserver health ...
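
The healthz wait above tolerates the interim 403 and 500 responses until the apiserver finally answers 200 ok. A minimal Go sketch of that polling loop (the URL, timings, and the skipped TLS verification are illustrative simplifications, not the tool's actual client setup):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses,
	// printing the interim non-200 bodies much like the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.61.87:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
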
	I0127 14:11:34.489783  603695 cni.go:84] Creating CNI manager for ""
	I0127 14:11:34.489792  603695 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:11:34.491265  603695 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:11:34.492575  603695 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:11:34.512294  603695 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:11:34.581023  603695 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:11:34.599189  603695 system_pods.go:59] 8 kube-system pods found
	I0127 14:11:34.599250  603695 system_pods.go:61] "coredns-668d6bf9bc-llkzr" [c3364308-569c-4386-ba4c-20ef713dd324] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:11:34.599264  603695 system_pods.go:61] "etcd-embed-certs-742142" [938bdda6-8caf-41a7-a4ed-6b19ff5f4936] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 14:11:34.599276  603695 system_pods.go:61] "kube-apiserver-embed-certs-742142" [14a58975-2427-4faf-8a17-41a8a5059b36] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 14:11:34.599289  603695 system_pods.go:61] "kube-controller-manager-embed-certs-742142" [b0d22b4e-eb65-4173-85b9-49113459f274] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 14:11:34.599301  603695 system_pods.go:61] "kube-proxy-knzhx" [74f36c33-8375-4a54-b174-17f8bf740726] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 14:11:34.599312  603695 system_pods.go:61] "kube-scheduler-embed-certs-742142" [72683ea6-fc6a-4261-953a-4bdb1b204ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 14:11:34.599324  603695 system_pods.go:61] "metrics-server-f79f97bbb-8jkfz" [173978b3-db86-42c5-99ca-9306961c5117] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:11:34.599341  603695 system_pods.go:61] "storage-provisioner" [e37312ed-d8ef-46fd-bbef-f1b4ea213e92] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 14:11:34.599354  603695 system_pods.go:74] duration metric: took 18.299386ms to wait for pod list to return data ...
	I0127 14:11:34.599366  603695 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:11:34.603001  603695 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:11:34.603035  603695 node_conditions.go:123] node cpu capacity is 2
	I0127 14:11:34.603049  603695 node_conditions.go:105] duration metric: took 3.677052ms to run NodePressure ...
	I0127 14:11:34.603073  603695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:11:34.891813  603695 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 14:11:34.897095  603695 kubeadm.go:739] kubelet initialised
	I0127 14:11:34.897121  603695 kubeadm.go:740] duration metric: took 5.27478ms waiting for restarted kubelet to initialise ...
	I0127 14:11:34.897132  603695 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:11:34.902769  603695 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-llkzr" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:36.908373  603695 pod_ready.go:103] pod "coredns-668d6bf9bc-llkzr" in "kube-system" namespace has status "Ready":"False"
	I0127 14:11:38.909087  603695 pod_ready.go:103] pod "coredns-668d6bf9bc-llkzr" in "kube-system" namespace has status "Ready":"False"
	I0127 14:11:41.409053  603695 pod_ready.go:103] pod "coredns-668d6bf9bc-llkzr" in "kube-system" namespace has status "Ready":"False"
	I0127 14:11:43.909779  603695 pod_ready.go:103] pod "coredns-668d6bf9bc-llkzr" in "kube-system" namespace has status "Ready":"False"
	I0127 14:11:45.910767  603695 pod_ready.go:103] pod "coredns-668d6bf9bc-llkzr" in "kube-system" namespace has status "Ready":"False"
	I0127 14:11:46.409453  603695 pod_ready.go:93] pod "coredns-668d6bf9bc-llkzr" in "kube-system" namespace has status "Ready":"True"
	I0127 14:11:46.409484  603695 pod_ready.go:82] duration metric: took 11.506688827s for pod "coredns-668d6bf9bc-llkzr" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:46.409503  603695 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:46.413703  603695 pod_ready.go:93] pod "etcd-embed-certs-742142" in "kube-system" namespace has status "Ready":"True"
	I0127 14:11:46.413721  603695 pod_ready.go:82] duration metric: took 4.207746ms for pod "etcd-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:46.413730  603695 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:46.420014  603695 pod_ready.go:93] pod "kube-apiserver-embed-certs-742142" in "kube-system" namespace has status "Ready":"True"
	I0127 14:11:46.420035  603695 pod_ready.go:82] duration metric: took 6.298876ms for pod "kube-apiserver-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:46.420047  603695 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:46.424425  603695 pod_ready.go:93] pod "kube-controller-manager-embed-certs-742142" in "kube-system" namespace has status "Ready":"True"
	I0127 14:11:46.424445  603695 pod_ready.go:82] duration metric: took 4.39ms for pod "kube-controller-manager-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:46.424456  603695 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-knzhx" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:46.428346  603695 pod_ready.go:93] pod "kube-proxy-knzhx" in "kube-system" namespace has status "Ready":"True"
	I0127 14:11:46.428365  603695 pod_ready.go:82] duration metric: took 3.900498ms for pod "kube-proxy-knzhx" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:46.428376  603695 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:46.808199  603695 pod_ready.go:93] pod "kube-scheduler-embed-certs-742142" in "kube-system" namespace has status "Ready":"True"
	I0127 14:11:46.808231  603695 pod_ready.go:82] duration metric: took 379.846447ms for pod "kube-scheduler-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:46.808246  603695 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace to be "Ready" ...
	I0127 14:11:48.817540  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:11:51.314054  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:11:53.314517  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:11:55.816490  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:11:57.817751  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:11:59.818332  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:01.819300  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:04.315829  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:06.817854  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:09.316196  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:11.818418  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:14.313935  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:16.315615  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:18.820472  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:21.315797  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:23.819119  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:26.315531  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:28.818698  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:31.314765  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:33.315058  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:35.813737  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:37.816549  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:40.314657  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:42.315499  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:44.317651  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:46.815658  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:48.816398  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:51.315064  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:53.315680  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:55.816454  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:12:58.314033  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:00.315415  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:02.815716  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:04.816841  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:07.315327  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:09.315582  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:11.817629  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:13.817909  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:16.315188  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:18.816416  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:21.316016  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:23.816399  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:26.314401  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:28.315917  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:30.316308  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:32.321345  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:34.814529  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:37.314220  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:39.314549  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:41.814430  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:43.817340  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:46.314914  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:48.315425  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:50.316390  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:52.318654  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:54.814148  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:56.815243  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:13:59.315293  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:01.814736  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:04.314840  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:06.814182  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:08.815853  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:11.314724  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:13.815567  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:16.314437  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:18.316407  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:20.814151  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:23.314677  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:25.314762  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:27.815287  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:29.817677  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:32.316252  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:34.814966  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:36.814995  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:38.815691  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:41.313866  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:43.314495  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:45.315248  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:47.815819  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:50.315154  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:52.315297  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:54.815141  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:56.815305  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:14:59.315141  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:01.815891  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:04.315646  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:06.815426  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:09.314276  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:11.315868  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:13.814849  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:16.314363  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:18.315087  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:20.813172  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:22.814694  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:25.317445  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:27.815785  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:30.315055  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:32.814690  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:35.315447  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:37.814379  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:40.314855  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:42.814534  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:45.314225  603695 pod_ready.go:103] pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace has status "Ready":"False"
	I0127 14:15:46.808574  603695 pod_ready.go:82] duration metric: took 4m0.000298s for pod "metrics-server-f79f97bbb-8jkfz" in "kube-system" namespace to be "Ready" ...
	E0127 14:15:46.808607  603695 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 14:15:46.808627  603695 pod_ready.go:39] duration metric: took 4m11.911484711s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:15:46.808663  603695 kubeadm.go:597] duration metric: took 4m20.886882086s to restartPrimaryControlPlane
	W0127 14:15:46.808730  603695 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 14:15:46.808765  603695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 14:16:14.633034  603695 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.824238947s)
	I0127 14:16:14.633132  603695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:16:14.648225  603695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:16:14.658742  603695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:16:14.667996  603695 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:16:14.668012  603695 kubeadm.go:157] found existing configuration files:
	
	I0127 14:16:14.668050  603695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:16:14.676663  603695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:16:14.676716  603695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:16:14.685648  603695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:16:14.694521  603695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:16:14.694585  603695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:16:14.703387  603695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:16:14.712006  603695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:16:14.712049  603695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:16:14.720953  603695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:16:14.729666  603695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:16:14.729706  603695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:16:14.738476  603695 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:16:14.903256  603695 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:16:22.973869  603695 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:16:22.973974  603695 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:16:22.974081  603695 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:16:22.974215  603695 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:16:22.974374  603695 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:16:22.974468  603695 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:16:22.975742  603695 out.go:235]   - Generating certificates and keys ...
	I0127 14:16:22.975828  603695 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:16:22.975884  603695 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:16:22.975956  603695 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:16:22.976041  603695 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:16:22.976132  603695 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:16:22.976180  603695 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:16:22.976235  603695 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:16:22.976291  603695 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:16:22.976360  603695 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:16:22.976446  603695 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:16:22.976496  603695 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:16:22.976551  603695 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:16:22.976618  603695 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:16:22.976688  603695 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:16:22.976739  603695 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:16:22.976804  603695 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:16:22.976861  603695 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:16:22.976931  603695 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:16:22.976996  603695 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:16:22.978370  603695 out.go:235]   - Booting up control plane ...
	I0127 14:16:22.978464  603695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:16:22.978544  603695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:16:22.978619  603695 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:16:22.978758  603695 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:16:22.978907  603695 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:16:22.978962  603695 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:16:22.979123  603695 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:16:22.979257  603695 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:16:22.979349  603695 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.388311ms
	I0127 14:16:22.979472  603695 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:16:22.979531  603695 kubeadm.go:310] [api-check] The API server is healthy after 5.001291238s
	I0127 14:16:22.979663  603695 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:16:22.979830  603695 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:16:22.979916  603695 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:16:22.980133  603695 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-742142 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:16:22.980184  603695 kubeadm.go:310] [bootstrap-token] Using token: yt8wzc.q5kg13ruqkambe36
	I0127 14:16:22.981465  603695 out.go:235]   - Configuring RBAC rules ...
	I0127 14:16:22.981568  603695 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:16:22.981660  603695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:16:22.981776  603695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:16:22.981874  603695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:16:22.981968  603695 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:16:22.982035  603695 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:16:22.982125  603695 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:16:22.982175  603695 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:16:22.982234  603695 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:16:22.982243  603695 kubeadm.go:310] 
	I0127 14:16:22.982302  603695 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:16:22.982310  603695 kubeadm.go:310] 
	I0127 14:16:22.982373  603695 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:16:22.982380  603695 kubeadm.go:310] 
	I0127 14:16:22.982408  603695 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:16:22.982459  603695 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:16:22.982505  603695 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:16:22.982511  603695 kubeadm.go:310] 
	I0127 14:16:22.982559  603695 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:16:22.982565  603695 kubeadm.go:310] 
	I0127 14:16:22.982618  603695 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:16:22.982633  603695 kubeadm.go:310] 
	I0127 14:16:22.982676  603695 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:16:22.982744  603695 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:16:22.982813  603695 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:16:22.982820  603695 kubeadm.go:310] 
	I0127 14:16:22.982891  603695 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:16:22.982956  603695 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:16:22.982961  603695 kubeadm.go:310] 
	I0127 14:16:22.983035  603695 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yt8wzc.q5kg13ruqkambe36 \
	I0127 14:16:22.983126  603695 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a60ff6161e02b5a75df4f173d820326404ac2037065d4322193a60c87e11fb02 \
	I0127 14:16:22.983149  603695 kubeadm.go:310] 	--control-plane 
	I0127 14:16:22.983153  603695 kubeadm.go:310] 
	I0127 14:16:22.983221  603695 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:16:22.983227  603695 kubeadm.go:310] 
	I0127 14:16:22.983294  603695 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yt8wzc.q5kg13ruqkambe36 \
	I0127 14:16:22.983400  603695 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a60ff6161e02b5a75df4f173d820326404ac2037065d4322193a60c87e11fb02 
	I0127 14:16:22.983431  603695 cni.go:84] Creating CNI manager for ""
	I0127 14:16:22.983444  603695 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:16:22.984844  603695 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:16:22.985906  603695 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:16:22.998213  603695 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:16:23.019184  603695 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:16:23.019257  603695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:16:23.019276  603695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-742142 minikube.k8s.io/updated_at=2025_01_27T14_16_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=embed-certs-742142 minikube.k8s.io/primary=true
	I0127 14:16:23.262506  603695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:16:23.304859  603695 ops.go:34] apiserver oom_adj: -16
	I0127 14:16:23.763175  603695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:16:24.263403  603695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:16:24.763597  603695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:16:25.263109  603695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:16:25.762557  603695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:16:26.263201  603695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:16:26.762817  603695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:16:26.853507  603695 kubeadm.go:1113] duration metric: took 3.834305793s to wait for elevateKubeSystemPrivileges
	I0127 14:16:26.853544  603695 kubeadm.go:394] duration metric: took 5m0.977137553s to StartCluster
	I0127 14:16:26.853567  603695 settings.go:142] acquiring lock: {Name:mk3584d1c70a231ddef63c926d3bba51690f47f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:16:26.853681  603695 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:16:26.855157  603695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:16:26.855456  603695 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.87 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:16:26.855593  603695 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:16:26.855672  603695 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-742142"
	I0127 14:16:26.855683  603695 config.go:182] Loaded profile config "embed-certs-742142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:16:26.855700  603695 addons.go:69] Setting metrics-server=true in profile "embed-certs-742142"
	I0127 14:16:26.855714  603695 addons.go:238] Setting addon metrics-server=true in "embed-certs-742142"
	W0127 14:16:26.855725  603695 addons.go:247] addon metrics-server should already be in state true
	I0127 14:16:26.855691  603695 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-742142"
	I0127 14:16:26.855719  603695 addons.go:69] Setting default-storageclass=true in profile "embed-certs-742142"
	W0127 14:16:26.855745  603695 addons.go:247] addon storage-provisioner should already be in state true
	I0127 14:16:26.855736  603695 addons.go:69] Setting dashboard=true in profile "embed-certs-742142"
	I0127 14:16:26.855775  603695 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-742142"
	I0127 14:16:26.855794  603695 host.go:66] Checking if "embed-certs-742142" exists ...
	I0127 14:16:26.855802  603695 addons.go:238] Setting addon dashboard=true in "embed-certs-742142"
	W0127 14:16:26.855815  603695 addons.go:247] addon dashboard should already be in state true
	I0127 14:16:26.855850  603695 host.go:66] Checking if "embed-certs-742142" exists ...
	I0127 14:16:26.855780  603695 host.go:66] Checking if "embed-certs-742142" exists ...
	I0127 14:16:26.856256  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:16:26.856277  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:16:26.856277  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:16:26.856259  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:16:26.856305  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:16:26.856318  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:16:26.856464  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:16:26.856551  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:16:26.857069  603695 out.go:177] * Verifying Kubernetes components...
	I0127 14:16:26.858506  603695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:16:26.894565  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0127 14:16:26.894586  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34741
	I0127 14:16:26.895225  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:16:26.895244  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:16:26.895858  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:16:26.895878  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:16:26.895933  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46785
	I0127 14:16:26.896000  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:16:26.896021  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:16:26.896205  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35501
	I0127 14:16:26.896342  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:16:26.896387  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:16:26.896725  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:16:26.896726  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:16:26.896813  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:16:26.896832  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:16:26.896868  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetState
	I0127 14:16:26.896984  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:16:26.897031  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:16:26.897148  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:16:26.897170  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:16:26.897186  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:16:26.897498  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:16:26.897704  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:16:26.897742  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:16:26.898014  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:16:26.898051  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:16:26.900347  603695 addons.go:238] Setting addon default-storageclass=true in "embed-certs-742142"
	W0127 14:16:26.900365  603695 addons.go:247] addon default-storageclass should already be in state true
	I0127 14:16:26.900385  603695 host.go:66] Checking if "embed-certs-742142" exists ...
	I0127 14:16:26.900627  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:16:26.900661  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:16:26.914753  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43493
	I0127 14:16:26.914902  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38451
	I0127 14:16:26.915278  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:16:26.915680  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:16:26.915851  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:16:26.915862  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:16:26.916244  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:16:26.916324  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:16:26.916340  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:16:26.916575  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetState
	I0127 14:16:26.917219  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0127 14:16:26.917252  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:16:26.917487  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetState
	I0127 14:16:26.917656  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:16:26.918167  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:16:26.918190  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:16:26.918571  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:16:26.918758  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetState
	I0127 14:16:26.918978  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:16:26.920310  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41589
	I0127 14:16:26.920357  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:16:26.920900  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:16:26.920921  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:16:26.921296  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:16:26.921307  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:16:26.921369  603695 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:16:26.921552  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:16:26.922111  603695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:16:26.922160  603695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:16:26.922469  603695 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 14:16:26.922470  603695 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 14:16:26.922660  603695 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:16:26.922672  603695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:16:26.922687  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:16:26.923497  603695 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:16:26.923515  603695 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:16:26.923533  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:16:26.924461  603695 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 14:16:26.926177  603695 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 14:16:26.926193  603695 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 14:16:26.926207  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:16:26.926386  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:16:26.926651  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:16:26.926851  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:16:26.926922  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:16:26.926990  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:16:26.927386  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:16:26.927589  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:16:26.927756  603695 sshutil.go:53] new ssh client: &{IP:192.168.61.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/embed-certs-742142/id_rsa Username:docker}
	I0127 14:16:26.927819  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:16:26.927955  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:16:26.927988  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:16:26.928039  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:16:26.928173  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:16:26.928350  603695 sshutil.go:53] new ssh client: &{IP:192.168.61.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/embed-certs-742142/id_rsa Username:docker}
	I0127 14:16:26.929479  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:16:26.929894  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:16:26.929925  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:16:26.930182  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:16:26.930351  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:16:26.930503  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:16:26.930636  603695 sshutil.go:53] new ssh client: &{IP:192.168.61.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/embed-certs-742142/id_rsa Username:docker}
	I0127 14:16:26.940004  603695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I0127 14:16:26.940693  603695 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:16:26.941150  603695 main.go:141] libmachine: Using API Version  1
	I0127 14:16:26.941328  603695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:16:26.941651  603695 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:16:26.941896  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetState
	I0127 14:16:26.943283  603695 main.go:141] libmachine: (embed-certs-742142) Calling .DriverName
	I0127 14:16:26.943505  603695 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:16:26.943521  603695 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:16:26.943537  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHHostname
	I0127 14:16:26.947813  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:16:26.947910  603695 main.go:141] libmachine: (embed-certs-742142) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:84:6b", ip: ""} in network mk-embed-certs-742142: {Iface:virbr3 ExpiryTime:2025-01-27 15:11:11 +0000 UTC Type:0 Mac:52:54:00:44:84:6b Iaid: IPaddr:192.168.61.87 Prefix:24 Hostname:embed-certs-742142 Clientid:01:52:54:00:44:84:6b}
	I0127 14:16:26.947931  603695 main.go:141] libmachine: (embed-certs-742142) DBG | domain embed-certs-742142 has defined IP address 192.168.61.87 and MAC address 52:54:00:44:84:6b in network mk-embed-certs-742142
	I0127 14:16:26.948064  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHPort
	I0127 14:16:26.948203  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHKeyPath
	I0127 14:16:26.948297  603695 main.go:141] libmachine: (embed-certs-742142) Calling .GetSSHUsername
	I0127 14:16:26.948384  603695 sshutil.go:53] new ssh client: &{IP:192.168.61.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/embed-certs-742142/id_rsa Username:docker}
	I0127 14:16:27.042404  603695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:16:27.067319  603695 node_ready.go:35] waiting up to 6m0s for node "embed-certs-742142" to be "Ready" ...
	I0127 14:16:27.092286  603695 node_ready.go:49] node "embed-certs-742142" has status "Ready":"True"
	I0127 14:16:27.092306  603695 node_ready.go:38] duration metric: took 24.957249ms for node "embed-certs-742142" to be "Ready" ...
	I0127 14:16:27.092316  603695 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:16:27.106129  603695 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:27.117421  603695 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 14:16:27.117475  603695 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 14:16:27.140845  603695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:16:27.153385  603695 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:16:27.153404  603695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 14:16:27.162891  603695 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 14:16:27.162910  603695 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 14:16:27.176192  603695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:16:27.188726  603695 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:16:27.188752  603695 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:16:27.215699  603695 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 14:16:27.215724  603695 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 14:16:27.289494  603695 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:16:27.289527  603695 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:16:27.315149  603695 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 14:16:27.315171  603695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 14:16:27.372046  603695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:16:27.374639  603695 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 14:16:27.374663  603695 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 14:16:27.441880  603695 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 14:16:27.441921  603695 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 14:16:27.480938  603695 main.go:141] libmachine: Making call to close driver server
	I0127 14:16:27.480971  603695 main.go:141] libmachine: (embed-certs-742142) Calling .Close
	I0127 14:16:27.481362  603695 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:16:27.481389  603695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:16:27.481400  603695 main.go:141] libmachine: Making call to close driver server
	I0127 14:16:27.481424  603695 main.go:141] libmachine: (embed-certs-742142) DBG | Closing plugin on server side
	I0127 14:16:27.481490  603695 main.go:141] libmachine: (embed-certs-742142) Calling .Close
	I0127 14:16:27.481739  603695 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:16:27.481760  603695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:16:27.505026  603695 main.go:141] libmachine: Making call to close driver server
	I0127 14:16:27.505049  603695 main.go:141] libmachine: (embed-certs-742142) Calling .Close
	I0127 14:16:27.505336  603695 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:16:27.505368  603695 main.go:141] libmachine: (embed-certs-742142) DBG | Closing plugin on server side
	I0127 14:16:27.505388  603695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:16:27.539607  603695 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 14:16:27.539636  603695 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 14:16:27.625424  603695 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 14:16:27.625459  603695 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 14:16:27.704480  603695 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:16:27.704521  603695 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 14:16:27.746119  603695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 14:16:28.150076  603695 main.go:141] libmachine: Making call to close driver server
	I0127 14:16:28.150105  603695 main.go:141] libmachine: (embed-certs-742142) Calling .Close
	I0127 14:16:28.150514  603695 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:16:28.150540  603695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:16:28.150550  603695 main.go:141] libmachine: Making call to close driver server
	I0127 14:16:28.150559  603695 main.go:141] libmachine: (embed-certs-742142) Calling .Close
	I0127 14:16:28.150551  603695 main.go:141] libmachine: (embed-certs-742142) DBG | Closing plugin on server side
	I0127 14:16:28.150911  603695 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:16:28.150916  603695 main.go:141] libmachine: (embed-certs-742142) DBG | Closing plugin on server side
	I0127 14:16:28.150927  603695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:16:28.633672  603695 pod_ready.go:93] pod "etcd-embed-certs-742142" in "kube-system" namespace has status "Ready":"True"
	I0127 14:16:28.633697  603695 pod_ready.go:82] duration metric: took 1.527547233s for pod "etcd-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:28.633707  603695 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:28.685989  603695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.313884993s)
	I0127 14:16:28.686062  603695 main.go:141] libmachine: Making call to close driver server
	I0127 14:16:28.686081  603695 main.go:141] libmachine: (embed-certs-742142) Calling .Close
	I0127 14:16:28.686489  603695 main.go:141] libmachine: (embed-certs-742142) DBG | Closing plugin on server side
	I0127 14:16:28.686501  603695 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:16:28.686517  603695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:16:28.686527  603695 main.go:141] libmachine: Making call to close driver server
	I0127 14:16:28.686537  603695 main.go:141] libmachine: (embed-certs-742142) Calling .Close
	I0127 14:16:28.686799  603695 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:16:28.686817  603695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:16:28.686831  603695 addons.go:479] Verifying addon metrics-server=true in "embed-certs-742142"
	I0127 14:16:30.109637  603695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.363461126s)
	I0127 14:16:30.109703  603695 main.go:141] libmachine: Making call to close driver server
	I0127 14:16:30.109724  603695 main.go:141] libmachine: (embed-certs-742142) Calling .Close
	I0127 14:16:30.110039  603695 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:16:30.110059  603695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:16:30.110075  603695 main.go:141] libmachine: Making call to close driver server
	I0127 14:16:30.110083  603695 main.go:141] libmachine: (embed-certs-742142) Calling .Close
	I0127 14:16:30.112536  603695 main.go:141] libmachine: (embed-certs-742142) DBG | Closing plugin on server side
	I0127 14:16:30.112549  603695 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:16:30.112571  603695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:16:30.113904  603695 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-742142 addons enable metrics-server
	
	I0127 14:16:30.115167  603695 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 14:16:30.116311  603695 addons.go:514] duration metric: took 3.26072699s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 14:16:30.643680  603695 pod_ready.go:103] pod "kube-apiserver-embed-certs-742142" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:32.645941  603695 pod_ready.go:103] pod "kube-apiserver-embed-certs-742142" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:35.353216  603695 pod_ready.go:103] pod "kube-apiserver-embed-certs-742142" in "kube-system" namespace has status "Ready":"False"
	I0127 14:16:35.641355  603695 pod_ready.go:93] pod "kube-apiserver-embed-certs-742142" in "kube-system" namespace has status "Ready":"True"
	I0127 14:16:35.641391  603695 pod_ready.go:82] duration metric: took 7.007674845s for pod "kube-apiserver-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:35.641406  603695 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:35.646657  603695 pod_ready.go:93] pod "kube-controller-manager-embed-certs-742142" in "kube-system" namespace has status "Ready":"True"
	I0127 14:16:35.646682  603695 pod_ready.go:82] duration metric: took 5.267885ms for pod "kube-controller-manager-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:35.646696  603695 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:36.653021  603695 pod_ready.go:93] pod "kube-scheduler-embed-certs-742142" in "kube-system" namespace has status "Ready":"True"
	I0127 14:16:36.653057  603695 pod_ready.go:82] duration metric: took 1.006351596s for pod "kube-scheduler-embed-certs-742142" in "kube-system" namespace to be "Ready" ...
	I0127 14:16:36.653069  603695 pod_ready.go:39] duration metric: took 9.560743195s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:16:36.653104  603695 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:16:36.653184  603695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:36.691481  603695 api_server.go:72] duration metric: took 9.835986206s to wait for apiserver process to appear ...
	I0127 14:16:36.691507  603695 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:16:36.691526  603695 api_server.go:253] Checking apiserver healthz at https://192.168.61.87:8443/healthz ...
	I0127 14:16:36.697160  603695 api_server.go:279] https://192.168.61.87:8443/healthz returned 200:
	ok
	I0127 14:16:36.698114  603695 api_server.go:141] control plane version: v1.32.1
	I0127 14:16:36.698140  603695 api_server.go:131] duration metric: took 6.626266ms to wait for apiserver health ...
	I0127 14:16:36.698151  603695 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:16:36.704912  603695 system_pods.go:59] 9 kube-system pods found
	I0127 14:16:36.705009  603695 system_pods.go:61] "coredns-668d6bf9bc-hmkdd" [e4283df2-0988-4342-9de1-896ac5a40d86] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 14:16:36.705042  603695 system_pods.go:61] "coredns-668d6bf9bc-kc8bv" [7817b17a-8213-42da-8957-5d97c8df5059] Running
	I0127 14:16:36.705065  603695 system_pods.go:61] "etcd-embed-certs-742142" [9051a27a-5dd4-4c37-bd2c-7d6a2014d456] Running
	I0127 14:16:36.705097  603695 system_pods.go:61] "kube-apiserver-embed-certs-742142" [465854b3-e238-478a-b9bd-594d9a446013] Running
	I0127 14:16:36.705117  603695 system_pods.go:61] "kube-controller-manager-embed-certs-742142" [5e8b3961-73d8-4352-a482-79befdbf86b7] Running
	I0127 14:16:36.705135  603695 system_pods.go:61] "kube-proxy-lvbtr" [12e2d7ac-5dd4-45f1-957d-9189f9d6a607] Running
	I0127 14:16:36.705153  603695 system_pods.go:61] "kube-scheduler-embed-certs-742142" [ffd67936-1279-42c8-a16c-f513104d386b] Running
	I0127 14:16:36.705173  603695 system_pods.go:61] "metrics-server-f79f97bbb-kclqf" [9539cc67-38fb-45e9-9884-c251c427b7d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 14:16:36.705196  603695 system_pods.go:61] "storage-provisioner" [b842b41e-ceeb-4132-bf70-2443e4c27ab9] Running
	I0127 14:16:36.705217  603695 system_pods.go:74] duration metric: took 7.05845ms to wait for pod list to return data ...
	I0127 14:16:36.705238  603695 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:16:36.708427  603695 default_sa.go:45] found service account: "default"
	I0127 14:16:36.708451  603695 default_sa.go:55] duration metric: took 3.203917ms for default service account to be created ...
	I0127 14:16:36.708459  603695 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:16:36.713000  603695 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-742142 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
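start_stop_delete_test.go note: the invocation above is the exact second-start command the harness killed after its deadline; a minimal reproduction sketch outside the test harness (assuming a locally built out/minikube-linux-amd64 and a KVM-capable host, not part of the recorded output) would be:
	out/minikube-linux-amd64 start -p embed-certs-742142 --memory=2200 --alsologtostderr --wait=true \
	  --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1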
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742142 -n embed-certs-742142
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-742142 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-742142 logs -n 25: (1.549477943s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo docker                         | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo find                           | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo crio                           | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p bridge-418372                                     | bridge-418372          | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	| delete  | -p old-k8s-version-456130                            | old-k8s-version-456130 | jenkins | v1.35.0 | 27 Jan 25 14:37 UTC | 27 Jan 25 14:37 UTC |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:29:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:29:58.428259  619737 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:29:58.428355  619737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:29:58.428363  619737 out.go:358] Setting ErrFile to fd 2...
	I0127 14:29:58.428369  619737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:29:58.428556  619737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:29:58.429178  619737 out.go:352] Setting JSON to false
	I0127 14:29:58.430355  619737 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":18743,"bootTime":1737969455,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:29:58.430472  619737 start.go:139] virtualization: kvm guest
	I0127 14:29:58.432328  619737 out.go:177] * [bridge-418372] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:29:58.433847  619737 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:29:58.433841  619737 notify.go:220] Checking for updates...
	I0127 14:29:58.435064  619737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:29:58.436272  619737 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:29:58.437495  619737 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:29:58.438658  619737 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:29:54.794135  618007 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 14:29:54.800129  618007 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 14:29:54.800149  618007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0127 14:29:54.827977  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 14:29:55.354721  618007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:29:55.354799  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:55.354815  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-418372 minikube.k8s.io/updated_at=2025_01_27T14_29_55_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=flannel-418372 minikube.k8s.io/primary=true
	I0127 14:29:55.498477  618007 ops.go:34] apiserver oom_adj: -16
	I0127 14:29:55.498561  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:55.998885  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:56.499532  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:56.998893  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:57.499229  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:57.999484  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:58.440406  619737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:29:58.442063  619737 config.go:182] Loaded profile config "embed-certs-742142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:29:58.442183  619737 config.go:182] Loaded profile config "flannel-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:29:58.442310  619737 config.go:182] Loaded profile config "old-k8s-version-456130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:29:58.442439  619737 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:29:58.481913  619737 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:29:58.482984  619737 start.go:297] selected driver: kvm2
	I0127 14:29:58.482999  619737 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:29:58.483014  619737 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:29:58.483732  619737 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:29:58.483833  619737 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:29:58.500677  619737 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:29:58.500725  619737 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:29:58.501048  619737 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:29:58.501095  619737 cni.go:84] Creating CNI manager for "bridge"
	I0127 14:29:58.501112  619737 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:29:58.501223  619737 start.go:340] cluster config:
	{Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:29:58.501374  619737 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:29:58.502978  619737 out.go:177] * Starting "bridge-418372" primary control-plane node in "bridge-418372" cluster
	I0127 14:29:58.504138  619737 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:29:58.504185  619737 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:29:58.504199  619737 cache.go:56] Caching tarball of preloaded images
	I0127 14:29:58.504311  619737 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:29:58.504327  619737 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:29:58.504450  619737 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/config.json ...
	I0127 14:29:58.504481  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/config.json: {Name:mk097cf8466e36fa95d1648a8e56c4a0cdde1a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:29:58.504659  619737 start.go:360] acquireMachinesLock for bridge-418372: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:29:58.504713  619737 start.go:364] duration metric: took 30.62µs to acquireMachinesLock for "bridge-418372"
	I0127 14:29:58.504739  619737 start.go:93] Provisioning new machine with config: &{Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:29:58.504825  619737 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 14:29:58.499356  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:58.598508  618007 kubeadm.go:1113] duration metric: took 3.243774581s to wait for elevateKubeSystemPrivileges
	I0127 14:29:58.598548  618007 kubeadm.go:394] duration metric: took 14.302797004s to StartCluster
	I0127 14:29:58.598576  618007 settings.go:142] acquiring lock: {Name:mk3584d1c70a231ddef63c926d3bba51690f47f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:29:58.598660  618007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:29:58.600178  618007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:29:58.600419  618007 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.236 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:29:58.600467  618007 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:29:58.600563  618007 addons.go:69] Setting storage-provisioner=true in profile "flannel-418372"
	I0127 14:29:58.600580  618007 addons.go:238] Setting addon storage-provisioner=true in "flannel-418372"
	I0127 14:29:58.600619  618007 host.go:66] Checking if "flannel-418372" exists ...
	I0127 14:29:58.600452  618007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 14:29:58.600644  618007 config.go:182] Loaded profile config "flannel-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:29:58.600634  618007 addons.go:69] Setting default-storageclass=true in profile "flannel-418372"
	I0127 14:29:58.600706  618007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-418372"
	I0127 14:29:58.601115  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.601158  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.601205  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.601251  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.602065  618007 out.go:177] * Verifying Kubernetes components...
	I0127 14:29:58.603305  618007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:29:58.619130  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0127 14:29:58.619384  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0127 14:29:58.619700  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.619900  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.620429  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.620455  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.620610  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.620627  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.620955  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.621103  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.621621  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.621657  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.622065  618007 main.go:141] libmachine: (flannel-418372) Calling .GetState
	I0127 14:29:58.625921  618007 addons.go:238] Setting addon default-storageclass=true in "flannel-418372"
	I0127 14:29:58.625960  618007 host.go:66] Checking if "flannel-418372" exists ...
	I0127 14:29:58.626287  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.626338  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.642239  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0127 14:29:58.642768  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.643416  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.643445  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.643901  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.644142  618007 main.go:141] libmachine: (flannel-418372) Calling .GetState
	I0127 14:29:58.646191  618007 main.go:141] libmachine: (flannel-418372) Calling .DriverName
	I0127 14:29:58.648095  618007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:29:58.648367  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0127 14:29:58.648707  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.649404  618007 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:29:58.649430  618007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:29:58.649463  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHHostname
	I0127 14:29:58.649503  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.649531  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.650223  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.650842  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.650889  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.652688  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.653147  618007 main.go:141] libmachine: (flannel-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:3b:a4", ip: ""} in network mk-flannel-418372: {Iface:virbr4 ExpiryTime:2025-01-27 15:29:29 +0000 UTC Type:0 Mac:52:54:00:b3:3b:a4 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:flannel-418372 Clientid:01:52:54:00:b3:3b:a4}
	I0127 14:29:58.653172  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined IP address 192.168.50.236 and MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.653365  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHPort
	I0127 14:29:58.653518  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHKeyPath
	I0127 14:29:58.653764  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHUsername
	I0127 14:29:58.653963  618007 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/flannel-418372/id_rsa Username:docker}
	I0127 14:29:58.666548  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0127 14:29:58.666868  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.667294  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.667314  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.667561  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.667762  618007 main.go:141] libmachine: (flannel-418372) Calling .GetState
	I0127 14:29:58.669489  618007 main.go:141] libmachine: (flannel-418372) Calling .DriverName
	I0127 14:29:58.669741  618007 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:29:58.669755  618007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:29:58.669767  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHHostname
	I0127 14:29:58.673157  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.673667  618007 main.go:141] libmachine: (flannel-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:3b:a4", ip: ""} in network mk-flannel-418372: {Iface:virbr4 ExpiryTime:2025-01-27 15:29:29 +0000 UTC Type:0 Mac:52:54:00:b3:3b:a4 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:flannel-418372 Clientid:01:52:54:00:b3:3b:a4}
	I0127 14:29:58.673740  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined IP address 192.168.50.236 and MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.673866  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHPort
	I0127 14:29:58.674035  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHKeyPath
	I0127 14:29:58.674189  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHUsername
	I0127 14:29:58.674352  618007 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/flannel-418372/id_rsa Username:docker}
	I0127 14:29:58.812282  618007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 14:29:58.843820  618007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:29:59.006382  618007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:29:59.076837  618007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:29:59.439964  618007 node_ready.go:35] waiting up to 15m0s for node "flannel-418372" to be "Ready" ...
	I0127 14:29:59.440353  618007 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0127 14:29:59.897933  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.897955  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.897964  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.897979  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.898296  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.898314  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.898325  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.898333  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.898451  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.898464  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.898472  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.898480  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.898484  618007 main.go:141] libmachine: (flannel-418372) DBG | Closing plugin on server side
	I0127 14:29:59.900207  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.900218  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.900268  618007 main.go:141] libmachine: (flannel-418372) DBG | Closing plugin on server side
	I0127 14:29:59.900273  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.900304  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.911467  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.911486  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.911738  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.911762  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.911766  618007 main.go:141] libmachine: (flannel-418372) DBG | Closing plugin on server side
	I0127 14:29:59.913044  618007 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 14:29:58.506345  619737 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 14:29:58.506539  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.506600  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.521777  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I0127 14:29:58.522212  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.522764  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.522793  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.523225  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.523506  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:29:58.523719  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:29:58.523905  619737 start.go:159] libmachine.API.Create for "bridge-418372" (driver="kvm2")
	I0127 14:29:58.523931  619737 client.go:168] LocalClient.Create starting
	I0127 14:29:58.523959  619737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem
	I0127 14:29:58.523990  619737 main.go:141] libmachine: Decoding PEM data...
	I0127 14:29:58.524006  619737 main.go:141] libmachine: Parsing certificate...
	I0127 14:29:58.524070  619737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem
	I0127 14:29:58.524089  619737 main.go:141] libmachine: Decoding PEM data...
	I0127 14:29:58.524100  619737 main.go:141] libmachine: Parsing certificate...
	I0127 14:29:58.524128  619737 main.go:141] libmachine: Running pre-create checks...
	I0127 14:29:58.524137  619737 main.go:141] libmachine: (bridge-418372) Calling .PreCreateCheck
	I0127 14:29:58.524515  619737 main.go:141] libmachine: (bridge-418372) Calling .GetConfigRaw
	I0127 14:29:58.525026  619737 main.go:141] libmachine: Creating machine...
	I0127 14:29:58.525043  619737 main.go:141] libmachine: (bridge-418372) Calling .Create
	I0127 14:29:58.525197  619737 main.go:141] libmachine: (bridge-418372) creating KVM machine...
	I0127 14:29:58.525214  619737 main.go:141] libmachine: (bridge-418372) creating network...
	I0127 14:29:58.526633  619737 main.go:141] libmachine: (bridge-418372) DBG | found existing default KVM network
	I0127 14:29:58.528058  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.527875  619760 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1d:6c:da} reservation:<nil>}
	I0127 14:29:58.529143  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.529064  619760 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:9f:16} reservation:<nil>}
	I0127 14:29:58.530053  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.529980  619760 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:9b:c5} reservation:<nil>}
	I0127 14:29:58.531138  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.531066  619760 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027fa90}
	I0127 14:29:58.531168  619737 main.go:141] libmachine: (bridge-418372) DBG | created network xml: 
	I0127 14:29:58.531176  619737 main.go:141] libmachine: (bridge-418372) DBG | <network>
	I0127 14:29:58.531181  619737 main.go:141] libmachine: (bridge-418372) DBG |   <name>mk-bridge-418372</name>
	I0127 14:29:58.531190  619737 main.go:141] libmachine: (bridge-418372) DBG |   <dns enable='no'/>
	I0127 14:29:58.531197  619737 main.go:141] libmachine: (bridge-418372) DBG |   
	I0127 14:29:58.531211  619737 main.go:141] libmachine: (bridge-418372) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0127 14:29:58.531225  619737 main.go:141] libmachine: (bridge-418372) DBG |     <dhcp>
	I0127 14:29:58.531254  619737 main.go:141] libmachine: (bridge-418372) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0127 14:29:58.531276  619737 main.go:141] libmachine: (bridge-418372) DBG |     </dhcp>
	I0127 14:29:58.531285  619737 main.go:141] libmachine: (bridge-418372) DBG |   </ip>
	I0127 14:29:58.531292  619737 main.go:141] libmachine: (bridge-418372) DBG |   
	I0127 14:29:58.531300  619737 main.go:141] libmachine: (bridge-418372) DBG | </network>
	I0127 14:29:58.531309  619737 main.go:141] libmachine: (bridge-418372) DBG | 
	I0127 14:29:58.536042  619737 main.go:141] libmachine: (bridge-418372) DBG | trying to create private KVM network mk-bridge-418372 192.168.72.0/24...
	I0127 14:29:58.619397  619737 main.go:141] libmachine: (bridge-418372) DBG | private KVM network mk-bridge-418372 192.168.72.0/24 created
	I0127 14:29:58.619417  619737 main.go:141] libmachine: (bridge-418372) setting up store path in /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372 ...
	I0127 14:29:58.619428  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.619379  619760 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:29:58.619443  619737 main.go:141] libmachine: (bridge-418372) building disk image from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:29:58.619522  619737 main.go:141] libmachine: (bridge-418372) Downloading /home/jenkins/minikube-integration/20327-555419/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 14:29:58.924369  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.924221  619760 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa...
	I0127 14:29:59.184940  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.184795  619760 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/bridge-418372.rawdisk...
	I0127 14:29:59.184993  619737 main.go:141] libmachine: (bridge-418372) DBG | Writing magic tar header
	I0127 14:29:59.185009  619737 main.go:141] libmachine: (bridge-418372) DBG | Writing SSH key tar header
	I0127 14:29:59.185032  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.184949  619760 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372 ...
	I0127 14:29:59.185152  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372
	I0127 14:29:59.185180  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines
	I0127 14:29:59.185194  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372 (perms=drwx------)
	I0127 14:29:59.185214  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines (perms=drwxr-xr-x)
	I0127 14:29:59.185231  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube (perms=drwxr-xr-x)
	I0127 14:29:59.185244  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419 (perms=drwxrwxr-x)
	I0127 14:29:59.185253  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 14:29:59.185264  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:29:59.185276  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 14:29:59.185287  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419
	I0127 14:29:59.185305  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 14:29:59.185319  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins
	I0127 14:29:59.185328  619737 main.go:141] libmachine: (bridge-418372) creating domain...
	I0127 14:29:59.185342  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home
	I0127 14:29:59.185355  619737 main.go:141] libmachine: (bridge-418372) DBG | skipping /home - not owner
	I0127 14:29:59.186522  619737 main.go:141] libmachine: (bridge-418372) define libvirt domain using xml: 
	I0127 14:29:59.186545  619737 main.go:141] libmachine: (bridge-418372) <domain type='kvm'>
	I0127 14:29:59.186554  619737 main.go:141] libmachine: (bridge-418372)   <name>bridge-418372</name>
	I0127 14:29:59.186567  619737 main.go:141] libmachine: (bridge-418372)   <memory unit='MiB'>3072</memory>
	I0127 14:29:59.186606  619737 main.go:141] libmachine: (bridge-418372)   <vcpu>2</vcpu>
	I0127 14:29:59.186644  619737 main.go:141] libmachine: (bridge-418372)   <features>
	I0127 14:29:59.186658  619737 main.go:141] libmachine: (bridge-418372)     <acpi/>
	I0127 14:29:59.186668  619737 main.go:141] libmachine: (bridge-418372)     <apic/>
	I0127 14:29:59.186687  619737 main.go:141] libmachine: (bridge-418372)     <pae/>
	I0127 14:29:59.186697  619737 main.go:141] libmachine: (bridge-418372)     
	I0127 14:29:59.186713  619737 main.go:141] libmachine: (bridge-418372)   </features>
	I0127 14:29:59.186724  619737 main.go:141] libmachine: (bridge-418372)   <cpu mode='host-passthrough'>
	I0127 14:29:59.186732  619737 main.go:141] libmachine: (bridge-418372)   
	I0127 14:29:59.186741  619737 main.go:141] libmachine: (bridge-418372)   </cpu>
	I0127 14:29:59.186749  619737 main.go:141] libmachine: (bridge-418372)   <os>
	I0127 14:29:59.186759  619737 main.go:141] libmachine: (bridge-418372)     <type>hvm</type>
	I0127 14:29:59.186771  619737 main.go:141] libmachine: (bridge-418372)     <boot dev='cdrom'/>
	I0127 14:29:59.186781  619737 main.go:141] libmachine: (bridge-418372)     <boot dev='hd'/>
	I0127 14:29:59.186791  619737 main.go:141] libmachine: (bridge-418372)     <bootmenu enable='no'/>
	I0127 14:29:59.186799  619737 main.go:141] libmachine: (bridge-418372)   </os>
	I0127 14:29:59.186807  619737 main.go:141] libmachine: (bridge-418372)   <devices>
	I0127 14:29:59.186816  619737 main.go:141] libmachine: (bridge-418372)     <disk type='file' device='cdrom'>
	I0127 14:29:59.186837  619737 main.go:141] libmachine: (bridge-418372)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/boot2docker.iso'/>
	I0127 14:29:59.186851  619737 main.go:141] libmachine: (bridge-418372)       <target dev='hdc' bus='scsi'/>
	I0127 14:29:59.186860  619737 main.go:141] libmachine: (bridge-418372)       <readonly/>
	I0127 14:29:59.186869  619737 main.go:141] libmachine: (bridge-418372)     </disk>
	I0127 14:29:59.186884  619737 main.go:141] libmachine: (bridge-418372)     <disk type='file' device='disk'>
	I0127 14:29:59.186896  619737 main.go:141] libmachine: (bridge-418372)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 14:29:59.186909  619737 main.go:141] libmachine: (bridge-418372)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/bridge-418372.rawdisk'/>
	I0127 14:29:59.186919  619737 main.go:141] libmachine: (bridge-418372)       <target dev='hda' bus='virtio'/>
	I0127 14:29:59.186925  619737 main.go:141] libmachine: (bridge-418372)     </disk>
	I0127 14:29:59.186931  619737 main.go:141] libmachine: (bridge-418372)     <interface type='network'>
	I0127 14:29:59.186939  619737 main.go:141] libmachine: (bridge-418372)       <source network='mk-bridge-418372'/>
	I0127 14:29:59.186945  619737 main.go:141] libmachine: (bridge-418372)       <model type='virtio'/>
	I0127 14:29:59.186968  619737 main.go:141] libmachine: (bridge-418372)     </interface>
	I0127 14:29:59.186980  619737 main.go:141] libmachine: (bridge-418372)     <interface type='network'>
	I0127 14:29:59.186989  619737 main.go:141] libmachine: (bridge-418372)       <source network='default'/>
	I0127 14:29:59.186999  619737 main.go:141] libmachine: (bridge-418372)       <model type='virtio'/>
	I0127 14:29:59.187007  619737 main.go:141] libmachine: (bridge-418372)     </interface>
	I0127 14:29:59.187016  619737 main.go:141] libmachine: (bridge-418372)     <serial type='pty'>
	I0127 14:29:59.187024  619737 main.go:141] libmachine: (bridge-418372)       <target port='0'/>
	I0127 14:29:59.187042  619737 main.go:141] libmachine: (bridge-418372)     </serial>
	I0127 14:29:59.187053  619737 main.go:141] libmachine: (bridge-418372)     <console type='pty'>
	I0127 14:29:59.187060  619737 main.go:141] libmachine: (bridge-418372)       <target type='serial' port='0'/>
	I0127 14:29:59.187070  619737 main.go:141] libmachine: (bridge-418372)     </console>
	I0127 14:29:59.187075  619737 main.go:141] libmachine: (bridge-418372)     <rng model='virtio'>
	I0127 14:29:59.187088  619737 main.go:141] libmachine: (bridge-418372)       <backend model='random'>/dev/random</backend>
	I0127 14:29:59.187099  619737 main.go:141] libmachine: (bridge-418372)     </rng>
	I0127 14:29:59.187109  619737 main.go:141] libmachine: (bridge-418372)     
	I0127 14:29:59.187115  619737 main.go:141] libmachine: (bridge-418372)     
	I0127 14:29:59.187127  619737 main.go:141] libmachine: (bridge-418372)   </devices>
	I0127 14:29:59.187133  619737 main.go:141] libmachine: (bridge-418372) </domain>
	I0127 14:29:59.187147  619737 main.go:141] libmachine: (bridge-418372) 
	I0127 14:29:59.192870  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:dc:94:4c in network default
	I0127 14:29:59.193459  619737 main.go:141] libmachine: (bridge-418372) starting domain...
	I0127 14:29:59.193498  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:29:59.193514  619737 main.go:141] libmachine: (bridge-418372) ensuring networks are active...
	I0127 14:29:59.194186  619737 main.go:141] libmachine: (bridge-418372) Ensuring network default is active
	I0127 14:29:59.194531  619737 main.go:141] libmachine: (bridge-418372) Ensuring network mk-bridge-418372 is active
	I0127 14:29:59.195173  619737 main.go:141] libmachine: (bridge-418372) getting domain XML...
	I0127 14:29:59.196009  619737 main.go:141] libmachine: (bridge-418372) creating domain...
	I0127 14:29:59.603422  619737 main.go:141] libmachine: (bridge-418372) waiting for IP...
	I0127 14:29:59.604334  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:29:59.604867  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:29:59.604937  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.604872  619760 retry.go:31] will retry after 303.965936ms: waiting for domain to come up
	I0127 14:29:59.910634  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:29:59.911365  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:29:59.911395  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.911327  619760 retry.go:31] will retry after 241.006912ms: waiting for domain to come up
	I0127 14:30:00.153815  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:00.154372  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:00.154403  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:00.154354  619760 retry.go:31] will retry after 323.516048ms: waiting for domain to come up
	I0127 14:30:00.479917  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:00.480471  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:00.480490  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:00.480451  619760 retry.go:31] will retry after 577.842165ms: waiting for domain to come up
	I0127 14:30:01.059664  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:01.060181  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:01.060209  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:01.060153  619760 retry.go:31] will retry after 693.227243ms: waiting for domain to come up
	I0127 14:30:01.754699  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:01.755198  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:01.755231  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:01.755167  619760 retry.go:31] will retry after 601.644547ms: waiting for domain to come up
	I0127 14:30:02.358857  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:02.359425  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:02.359456  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:02.359398  619760 retry.go:31] will retry after 805.211831ms: waiting for domain to come up
	I0127 14:30:03.166329  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:03.166920  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:03.166954  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:03.166895  619760 retry.go:31] will retry after 1.344095834s: waiting for domain to come up
	I0127 14:29:59.914025  618007 addons.go:514] duration metric: took 1.313551088s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 14:29:59.948236  618007 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-418372" context rescaled to 1 replicas
	I0127 14:30:01.444005  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:04.513305  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:04.513804  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:04.513825  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:04.513785  619760 retry.go:31] will retry after 1.439144315s: waiting for domain to come up
	I0127 14:30:05.954624  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:05.955150  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:05.955180  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:05.955114  619760 retry.go:31] will retry after 1.897876702s: waiting for domain to come up
	I0127 14:30:07.854669  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:07.855304  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:07.855364  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:07.855289  619760 retry.go:31] will retry after 1.982634575s: waiting for domain to come up
	I0127 14:30:03.943205  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:05.944150  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:09.839318  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:09.839985  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:09.840015  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:09.839942  619760 retry.go:31] will retry after 3.383361388s: waiting for domain to come up
	I0127 14:30:13.226586  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:13.227082  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:13.227161  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:13.227058  619760 retry.go:31] will retry after 3.076957623s: waiting for domain to come up
	I0127 14:30:08.444021  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:10.944599  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:16.306620  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:16.307278  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:16.307306  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:16.307257  619760 retry.go:31] will retry after 5.232439528s: waiting for domain to come up
	I0127 14:30:13.443330  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:15.943802  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:21.543562  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.544125  619737 main.go:141] libmachine: (bridge-418372) found domain IP: 192.168.72.158
	I0127 14:30:21.544159  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has current primary IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.544168  619737 main.go:141] libmachine: (bridge-418372) reserving static IP address...
	I0127 14:30:21.544584  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find host DHCP lease matching {name: "bridge-418372", mac: "52:54:00:34:a5:5b", ip: "192.168.72.158"} in network mk-bridge-418372
	I0127 14:30:21.620096  619737 main.go:141] libmachine: (bridge-418372) DBG | Getting to WaitForSSH function...
	I0127 14:30:21.620142  619737 main.go:141] libmachine: (bridge-418372) reserved static IP address 192.168.72.158 for domain bridge-418372
	I0127 14:30:21.620156  619737 main.go:141] libmachine: (bridge-418372) waiting for SSH...
	I0127 14:30:21.623062  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.623569  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.623601  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.623801  619737 main.go:141] libmachine: (bridge-418372) DBG | Using SSH client type: external
	I0127 14:30:21.623826  619737 main.go:141] libmachine: (bridge-418372) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa (-rw-------)
	I0127 14:30:21.623865  619737 main.go:141] libmachine: (bridge-418372) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:30:21.623880  619737 main.go:141] libmachine: (bridge-418372) DBG | About to run SSH command:
	I0127 14:30:21.623915  619737 main.go:141] libmachine: (bridge-418372) DBG | exit 0
	I0127 14:30:21.749658  619737 main.go:141] libmachine: (bridge-418372) DBG | SSH cmd err, output: <nil>: 
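
Once the domain has an IP, "waiting for SSH" simply shells out to the system ssh binary with host-key checking disabled and runs exit 0 until the command returns zero. A rough, self-contained sketch of that probe; the address and key path in main are placeholders copied from the log above:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh ... exit 0` against the new VM; a zero exit status means
// the SSH daemon is up and the key is accepted.
func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	// Placeholders taken from the log lines above.
	addr := "192.168.72.158"
	key := "/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa"
	for !sshReady(addr, key) {
		fmt.Println("SSH not ready yet, retrying...")
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH is available")
}
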
	I0127 14:30:21.749918  619737 main.go:141] libmachine: (bridge-418372) KVM machine creation complete
	I0127 14:30:21.750400  619737 main.go:141] libmachine: (bridge-418372) Calling .GetConfigRaw
	I0127 14:30:21.750961  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:21.751196  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:21.751406  619737 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 14:30:21.751421  619737 main.go:141] libmachine: (bridge-418372) Calling .GetState
	I0127 14:30:21.752834  619737 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 14:30:21.752851  619737 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 14:30:21.752859  619737 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 14:30:21.752883  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:21.755459  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.755886  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.755913  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.756091  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:21.756297  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.756467  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.756642  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:21.756809  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:21.757010  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:21.757020  619737 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 14:30:21.856846  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:30:21.856875  619737 main.go:141] libmachine: Detecting the provisioner...
	I0127 14:30:21.856885  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:21.859711  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.860096  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.860133  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.860331  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:21.860555  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.860723  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.860912  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:21.861103  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:21.861357  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:21.861375  619737 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 14:30:21.966551  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 14:30:21.966638  619737 main.go:141] libmachine: found compatible host: buildroot
	I0127 14:30:21.966653  619737 main.go:141] libmachine: Provisioning with buildroot...
	I0127 14:30:21.966663  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:30:21.966929  619737 buildroot.go:166] provisioning hostname "bridge-418372"
	I0127 14:30:21.966993  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:30:21.967184  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:21.969863  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.970301  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.970330  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.970473  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:21.970662  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.970806  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.970980  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:21.971184  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:21.971397  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:21.971411  619737 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-418372 && echo "bridge-418372" | sudo tee /etc/hostname
	I0127 14:30:22.088428  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-418372
	
	I0127 14:30:22.088472  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.091063  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.091586  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.091611  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.091821  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.092004  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.092139  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.092303  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.092514  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:22.092705  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:22.092732  619737 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-418372' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-418372/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-418372' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:30:22.206493  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
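
The hostname step above makes sure /etc/hosts resolves the new machine name: if no line already mentions it, an existing 127.0.1.1 entry is rewritten, otherwise one is appended. The same logic as a small, purely illustrative string-level sketch:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell snippet run over SSH: if no line already
// mentions the hostname, either rewrite the existing 127.0.1.1 entry or
// append a new one.
func ensureHostname(hosts, name string) string {
	if strings.Contains(hosts, name) {
		return hosts // hostname already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + name
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, entry)
	}
	return strings.TrimRight(hosts, "\n") + "\n" + entry + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(hosts, "bridge-418372"))
}
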
	I0127 14:30:22.206523  619737 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:30:22.206555  619737 buildroot.go:174] setting up certificates
	I0127 14:30:22.206570  619737 provision.go:84] configureAuth start
	I0127 14:30:22.206580  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:30:22.206870  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:22.209586  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.209920  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.209959  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.210081  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.212164  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.212510  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.212527  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.212711  619737 provision.go:143] copyHostCerts
	I0127 14:30:22.212761  619737 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:30:22.212785  619737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:30:22.212874  619737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:30:22.213016  619737 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:30:22.213027  619737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:30:22.213064  619737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:30:22.213138  619737 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:30:22.213146  619737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:30:22.213168  619737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
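
copyHostCerts above uses a plain remove-then-copy pattern for each certificate file under .minikube. A generic sketch of that operation; the paths in main are placeholders:

package main

import (
	"fmt"
	"io"
	"log"
	"os"
)

// replaceFile removes dst if it already exists and then copies src into its
// place, matching the "found ..., removing ..." / "cp: ..." flow in the log.
func replaceFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("found %s, removing ...\n", dst)
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	// Placeholder paths; in the log these live under .minikube/certs.
	if err := replaceFile("certs/key.pem", "key.pem"); err != nil {
		log.Fatal(err)
	}
}
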
	I0127 14:30:22.213230  619737 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.bridge-418372 san=[127.0.0.1 192.168.72.158 bridge-418372 localhost minikube]
	I0127 14:30:22.548623  619737 provision.go:177] copyRemoteCerts
	I0127 14:30:22.548680  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:30:22.548706  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.551241  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.551575  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.551604  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.551796  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.552020  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.552246  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.552395  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:22.643890  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:30:22.670713  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 14:30:22.693627  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:30:22.717638  619737 provision.go:87] duration metric: took 511.05611ms to configureAuth
	I0127 14:30:22.717668  619737 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:30:22.717835  619737 config.go:182] Loaded profile config "bridge-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:30:22.717935  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.720466  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.720835  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.720865  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.721045  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.721238  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.721385  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.721514  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.721646  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:22.721822  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:22.721844  619737 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:30:22.938113  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:30:22.938145  619737 main.go:141] libmachine: Checking connection to Docker...
	I0127 14:30:22.938155  619737 main.go:141] libmachine: (bridge-418372) Calling .GetURL
	I0127 14:30:22.939593  619737 main.go:141] libmachine: (bridge-418372) DBG | using libvirt version 6000000
	I0127 14:30:22.942205  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.942565  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.942607  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.942749  619737 main.go:141] libmachine: Docker is up and running!
	I0127 14:30:22.942779  619737 main.go:141] libmachine: Reticulating splines...
	I0127 14:30:22.942791  619737 client.go:171] duration metric: took 24.418851853s to LocalClient.Create
	I0127 14:30:22.942815  619737 start.go:167] duration metric: took 24.418910733s to libmachine.API.Create "bridge-418372"
	I0127 14:30:22.942825  619737 start.go:293] postStartSetup for "bridge-418372" (driver="kvm2")
	I0127 14:30:22.942834  619737 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:30:22.942854  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:22.943081  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:30:22.943104  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.945274  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.945649  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.945678  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.945844  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.946014  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.946145  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.946279  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:23.027435  619737 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:30:23.031408  619737 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:30:23.031432  619737 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:30:23.031490  619737 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:30:23.031589  619737 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:30:23.031684  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:30:23.041098  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:30:23.064771  619737 start.go:296] duration metric: took 121.935009ms for postStartSetup
	I0127 14:30:23.064822  619737 main.go:141] libmachine: (bridge-418372) Calling .GetConfigRaw
	I0127 14:30:23.065340  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:23.068126  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.068566  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.068585  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.068850  619737 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/config.json ...
	I0127 14:30:23.069082  619737 start.go:128] duration metric: took 24.564244155s to createHost
	I0127 14:30:23.069112  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:23.071565  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.071930  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.071958  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.072093  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:23.072294  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.072485  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.072602  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:23.072779  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:23.072928  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:23.072937  619737 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:30:23.173863  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737988223.150041878
	
	I0127 14:30:23.173884  619737 fix.go:216] guest clock: 1737988223.150041878
	I0127 14:30:23.173890  619737 fix.go:229] Guest: 2025-01-27 14:30:23.150041878 +0000 UTC Remote: 2025-01-27 14:30:23.069097778 +0000 UTC m=+24.679552593 (delta=80.9441ms)
	I0127 14:30:23.173936  619737 fix.go:200] guest clock delta is within tolerance: 80.9441ms
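
The guest clock check parses the `date +%s.%N` output from the VM and compares it with the host clock; a small delta (80.9441ms here) is accepted. A simplified sketch of the parse-and-compare step; the tolerance constant is illustrative, not minikube's exact value:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the `date +%s.%N` output from the guest into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1737988223.150041878\n")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Illustrative tolerance; the log only shows that the ~81ms delta passed.
	const tolerance = 2 * time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
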
	I0127 14:30:23.173948  619737 start.go:83] releasing machines lock for "bridge-418372", held for 24.669221959s
	I0127 14:30:23.173973  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.174207  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:23.176840  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.177209  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.177240  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.177413  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.177905  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.178089  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.178172  619737 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:30:23.178218  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:23.178318  619737 ssh_runner.go:195] Run: cat /version.json
	I0127 14:30:23.178350  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:23.181082  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181120  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181443  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.181470  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181496  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.181513  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181567  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:23.181734  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.181816  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:23.181907  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:23.181974  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.182052  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:23.182110  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:23.182209  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:23.254783  619737 ssh_runner.go:195] Run: systemctl --version
	I0127 14:30:23.277936  619737 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:30:18.443736  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:20.942676  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:21.452564  618007 node_ready.go:49] node "flannel-418372" has status "Ready":"True"
	I0127 14:30:21.452591  618007 node_ready.go:38] duration metric: took 22.012579891s for node "flannel-418372" to be "Ready" ...
	I0127 14:30:21.452602  618007 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:30:21.461767  618007 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:23.436466  619737 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:30:23.443141  619737 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:30:23.443197  619737 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:30:23.460545  619737 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
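
Before installing the bridge CNI, existing bridge/podman configs under /etc/cni/net.d are renamed with a .mk_disabled suffix so they cannot conflict, which is what the find/mv command above does. A rough Go equivalent of that rename pass:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs so they no longer
// conflict with the CNI about to be installed, mirroring the
// `find ... -exec mv {} {}.mk_disabled` step in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}
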
	I0127 14:30:23.460567  619737 start.go:495] detecting cgroup driver to use...
	I0127 14:30:23.460628  619737 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:30:23.479133  619737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:30:23.494546  619737 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:30:23.494614  619737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:30:23.508408  619737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:30:23.521348  619737 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:30:23.635456  619737 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:30:23.765321  619737 docker.go:233] disabling docker service ...
	I0127 14:30:23.765393  619737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:30:23.778859  619737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:30:23.790920  619737 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:30:23.924634  619737 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:30:24.053414  619737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:30:24.066957  619737 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:30:24.085971  619737 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:30:24.086040  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.096202  619737 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:30:24.096256  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.106388  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.116650  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.127369  619737 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:30:24.137556  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.147564  619737 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.166019  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
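
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and adjust the conmon_cgroup and default_sysctls entries. An in-memory sketch of the first two rewrites (illustrative only, not the actual provisioning code):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same kind of whole-line rewrites the sed commands
// perform on 02-crio.conf: pin the pause image and set the cgroup manager.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(conf, "registry.k8s.io/pause:3.10", "cgroupfs"))
}
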
	I0127 14:30:24.176231  619737 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:30:24.185246  619737 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:30:24.185296  619737 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:30:24.198571  619737 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:30:24.207701  619737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:30:24.326803  619737 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:30:24.416087  619737 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:30:24.416166  619737 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:30:24.421135  619737 start.go:563] Will wait 60s for crictl version
	I0127 14:30:24.421191  619737 ssh_runner.go:195] Run: which crictl
	I0127 14:30:24.425096  619737 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:30:24.467553  619737 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:30:24.467656  619737 ssh_runner.go:195] Run: crio --version
	I0127 14:30:24.494858  619737 ssh_runner.go:195] Run: crio --version
	I0127 14:30:24.523951  619737 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:30:24.525015  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:24.527690  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:24.528062  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:24.528102  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:24.528378  619737 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 14:30:24.532290  619737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:30:24.545520  619737 kubeadm.go:883] updating cluster {Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:30:24.545653  619737 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:30:24.545722  619737 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:30:24.578117  619737 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 14:30:24.578183  619737 ssh_runner.go:195] Run: which lz4
	I0127 14:30:24.581940  619737 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:30:24.585899  619737 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:30:24.585926  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 14:30:26.046393  619737 crio.go:462] duration metric: took 1.464480043s to copy over tarball
	I0127 14:30:26.046476  619737 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:30:28.286060  619737 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.239526518s)
	I0127 14:30:28.286090  619737 crio.go:469] duration metric: took 2.239666444s to extract the tarball
	I0127 14:30:28.286098  619737 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 14:30:28.329925  619737 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:30:28.372463  619737 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:30:28.372493  619737 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:30:28.372506  619737 kubeadm.go:934] updating node { 192.168.72.158 8443 v1.32.1 crio true true} ...
	I0127 14:30:28.372639  619737 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-418372 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 14:30:28.372730  619737 ssh_runner.go:195] Run: crio config
	I0127 14:30:23.469182  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:25.470378  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:27.969278  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:28.431389  619737 cni.go:84] Creating CNI manager for "bridge"
	I0127 14:30:28.431419  619737 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:30:28.431445  619737 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-418372 NodeName:bridge-418372 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:30:28.431596  619737 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-418372"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.158"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
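
The kubeadm config printed above is rendered from the option struct logged at kubeadm.go:189 and later written to /var/tmp/minikube/kubeadm.yaml.new. A much-reduced text/template sketch of how such a file can be generated; the struct and template here cover only a couple of fields and are not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is a much-reduced stand-in for the options that feed the
// kubeadm config template.
type kubeadmOpts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.72.158",
		BindPort:         8443,
		NodeName:         "bridge-418372",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.32.1",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
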
	I0127 14:30:28.431664  619737 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:30:28.443712  619737 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:30:28.443775  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:30:28.453106  619737 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0127 14:30:28.472323  619737 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:30:28.488568  619737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0127 14:30:28.505501  619737 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0127 14:30:28.509628  619737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:30:28.522026  619737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:30:28.644859  619737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:30:28.660903  619737 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372 for IP: 192.168.72.158
	I0127 14:30:28.660924  619737 certs.go:194] generating shared ca certs ...
	I0127 14:30:28.660945  619737 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:28.661145  619737 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:30:28.661204  619737 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:30:28.661218  619737 certs.go:256] generating profile certs ...
	I0127 14:30:28.661295  619737 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.key
	I0127 14:30:28.661316  619737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt with IP's: []
	I0127 14:30:28.906551  619737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt ...
	I0127 14:30:28.906578  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: {Name:mk1e2537950485aa8b4f79c1832edd87a69fac76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:28.906770  619737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.key ...
	I0127 14:30:28.906787  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.key: {Name:mkefc91979c182951e8440280201021e6feaf0b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:28.906903  619737 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b
	I0127 14:30:28.906926  619737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.158]
	I0127 14:30:29.091201  619737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b ...
	I0127 14:30:29.091235  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b: {Name:mkd8eb8b7ce81ecb1ea18b8612606f856d364bd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.091400  619737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b ...
	I0127 14:30:29.091415  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b: {Name:mk69a1ca35d981f975238e5836687217bd190f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.091489  619737 certs.go:381] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt
	I0127 14:30:29.091560  619737 certs.go:385] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key
	I0127 14:30:29.091639  619737 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key
	I0127 14:30:29.091657  619737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt with IP's: []
	I0127 14:30:29.149860  619737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt ...
	I0127 14:30:29.149879  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt: {Name:mk7035d438a8cb1c492fb958853882394afbe27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.149993  619737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key ...
	I0127 14:30:29.150004  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key: {Name:mka8c6fd9acdaec459c9ef3e4dfbb4b5c5547317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.150161  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:30:29.150202  619737 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:30:29.150212  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:30:29.150232  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:30:29.150253  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:30:29.150272  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:30:29.150313  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:30:29.150944  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:30:29.175883  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:30:29.199205  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:30:29.222754  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:30:29.245909  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 14:30:29.269824  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 14:30:29.292470  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:30:29.315043  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:30:29.354655  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:30:29.383756  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:30:29.416181  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:30:29.439715  619737 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:30:29.456721  619737 ssh_runner.go:195] Run: openssl version
	I0127 14:30:29.464239  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:30:29.475723  619737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:30:29.480470  619737 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:30:29.480515  619737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:30:29.486322  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:30:29.496846  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:30:29.507085  619737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:30:29.511703  619737 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:30:29.511754  619737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:30:29.517449  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:30:29.527666  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:30:29.540074  619737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:30:29.544916  619737 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:30:29.544955  619737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:30:29.551000  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:30:29.562167  619737 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:30:29.566616  619737 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 14:30:29.566681  619737 kubeadm.go:392] StartCluster: {Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:30:29.566758  619737 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:30:29.566808  619737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:30:29.609003  619737 cri.go:89] found id: ""
	I0127 14:30:29.609076  619737 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:30:29.618951  619737 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:30:29.628562  619737 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:30:29.637724  619737 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:30:29.637742  619737 kubeadm.go:157] found existing configuration files:
	
	I0127 14:30:29.637782  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:30:29.648947  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:30:29.648987  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:30:29.657991  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:30:29.666526  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:30:29.666559  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:30:29.676483  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:30:29.685024  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:30:29.685073  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:30:29.693937  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:30:29.702972  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:30:29.703020  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:30:29.712304  619737 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:30:29.774803  619737 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:30:29.774988  619737 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:30:29.875816  619737 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:30:29.875979  619737 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:30:29.876114  619737 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:30:29.888173  619737 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:30:29.945220  619737 out.go:235]   - Generating certificates and keys ...
	I0127 14:30:29.945359  619737 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:30:29.945448  619737 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:30:30.158542  619737 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 14:30:30.651792  619737 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 14:30:30.728655  619737 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 14:30:30.849544  619737 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 14:30:31.081949  619737 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 14:30:31.082098  619737 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-418372 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	I0127 14:30:31.339755  619737 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 14:30:31.339980  619737 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-418372 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	I0127 14:30:31.556885  619737 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 14:30:31.958984  619737 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 14:30:32.398271  619737 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 14:30:32.398452  619737 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:30:32.525025  619737 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:30:32.699085  619737 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:30:33.067374  619737 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:30:33.229761  619737 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:30:30.074789  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:32.468447  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:33.740325  619737 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:30:33.741768  619737 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:30:33.745759  619737 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:30:34.472510  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:35.969131  618007 pod_ready.go:93] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:35.969163  618007 pod_ready.go:82] duration metric: took 14.507366859s for pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.969178  618007 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.974351  618007 pod_ready.go:93] pod "etcd-flannel-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:35.974376  618007 pod_ready.go:82] duration metric: took 5.188773ms for pod "etcd-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.974389  618007 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.979590  618007 pod_ready.go:93] pod "kube-apiserver-flannel-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:35.979610  618007 pod_ready.go:82] duration metric: took 5.212396ms for pod "kube-apiserver-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.979623  618007 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.984005  618007 pod_ready.go:93] pod "kube-controller-manager-flannel-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:35.984026  618007 pod_ready.go:82] duration metric: took 4.395194ms for pod "kube-controller-manager-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.984035  618007 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5gszq" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.988140  618007 pod_ready.go:93] pod "kube-proxy-5gszq" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:35.988163  618007 pod_ready.go:82] duration metric: took 4.120445ms for pod "kube-proxy-5gszq" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.988179  618007 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:36.366430  618007 pod_ready.go:93] pod "kube-scheduler-flannel-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:36.366453  618007 pod_ready.go:82] duration metric: took 378.266563ms for pod "kube-scheduler-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:36.366464  618007 pod_ready.go:39] duration metric: took 14.913850556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:30:36.366482  618007 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:30:36.366541  618007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:30:36.387742  618007 api_server.go:72] duration metric: took 37.787293582s to wait for apiserver process to appear ...
	I0127 14:30:36.387769  618007 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:30:36.387798  618007 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I0127 14:30:36.394095  618007 api_server.go:279] https://192.168.50.236:8443/healthz returned 200:
	ok
	I0127 14:30:36.395090  618007 api_server.go:141] control plane version: v1.32.1
	I0127 14:30:36.395112  618007 api_server.go:131] duration metric: took 7.335974ms to wait for apiserver health ...
	I0127 14:30:36.395120  618007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:30:36.572676  618007 system_pods.go:59] 7 kube-system pods found
	I0127 14:30:36.572713  618007 system_pods.go:61] "coredns-668d6bf9bc-jnmf4" [c977d232-5060-4bb7-8a11-1834ac61ef70] Running
	I0127 14:30:36.572722  618007 system_pods.go:61] "etcd-flannel-418372" [b6786b38-1937-4cfb-8a7b-d27847d7c390] Running
	I0127 14:30:36.572732  618007 system_pods.go:61] "kube-apiserver-flannel-418372" [94d4d209-0533-4c6d-92fc-5de7f59a5ca5] Running
	I0127 14:30:36.572739  618007 system_pods.go:61] "kube-controller-manager-flannel-418372" [c09eb55e-c216-472e-bec3-74d7bdd0d915] Running
	I0127 14:30:36.572747  618007 system_pods.go:61] "kube-proxy-5gszq" [11888572-b936-4c6b-99f3-8469d40359e5] Running
	I0127 14:30:36.572752  618007 system_pods.go:61] "kube-scheduler-flannel-418372" [8954ced6-4dda-4e4e-bcfc-19caef64932d] Running
	I0127 14:30:36.572757  618007 system_pods.go:61] "storage-provisioner" [f1193abf-2fe5-4e06-a829-d9b51a5cd773] Running
	I0127 14:30:36.572767  618007 system_pods.go:74] duration metric: took 177.638734ms to wait for pod list to return data ...
	I0127 14:30:36.572777  618007 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:30:36.767343  618007 default_sa.go:45] found service account: "default"
	I0127 14:30:36.767380  618007 default_sa.go:55] duration metric: took 194.588661ms for default service account to be created ...
	I0127 14:30:36.767392  618007 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:30:36.972476  618007 system_pods.go:87] 7 kube-system pods found
	I0127 14:30:37.166823  618007 system_pods.go:105] "coredns-668d6bf9bc-jnmf4" [c977d232-5060-4bb7-8a11-1834ac61ef70] Running
	I0127 14:30:37.166851  618007 system_pods.go:105] "etcd-flannel-418372" [b6786b38-1937-4cfb-8a7b-d27847d7c390] Running
	I0127 14:30:37.166858  618007 system_pods.go:105] "kube-apiserver-flannel-418372" [94d4d209-0533-4c6d-92fc-5de7f59a5ca5] Running
	I0127 14:30:37.166866  618007 system_pods.go:105] "kube-controller-manager-flannel-418372" [c09eb55e-c216-472e-bec3-74d7bdd0d915] Running
	I0127 14:30:37.166873  618007 system_pods.go:105] "kube-proxy-5gszq" [11888572-b936-4c6b-99f3-8469d40359e5] Running
	I0127 14:30:37.166880  618007 system_pods.go:105] "kube-scheduler-flannel-418372" [8954ced6-4dda-4e4e-bcfc-19caef64932d] Running
	I0127 14:30:37.166887  618007 system_pods.go:105] "storage-provisioner" [f1193abf-2fe5-4e06-a829-d9b51a5cd773] Running
	I0127 14:30:37.166898  618007 system_pods.go:147] duration metric: took 399.497203ms to wait for k8s-apps to be running ...
	I0127 14:30:37.166907  618007 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 14:30:37.166960  618007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:30:37.184544  618007 system_svc.go:56] duration metric: took 17.628067ms WaitForService to wait for kubelet
	I0127 14:30:37.184580  618007 kubeadm.go:582] duration metric: took 38.584133747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:30:37.184603  618007 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:30:37.366971  618007 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:30:37.367010  618007 node_conditions.go:123] node cpu capacity is 2
	I0127 14:30:37.367029  618007 node_conditions.go:105] duration metric: took 182.419341ms to run NodePressure ...
	I0127 14:30:37.367045  618007 start.go:241] waiting for startup goroutines ...
	I0127 14:30:37.367054  618007 start.go:246] waiting for cluster config update ...
	I0127 14:30:37.367071  618007 start.go:255] writing updated cluster config ...
	I0127 14:30:37.367429  618007 ssh_runner.go:195] Run: rm -f paused
	I0127 14:30:37.421409  618007 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:30:37.423206  618007 out.go:177] * Done! kubectl is now configured to use "flannel-418372" cluster and "default" namespace by default
	I0127 14:30:33.812497  619737 out.go:235]   - Booting up control plane ...
	I0127 14:30:33.812717  619737 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:30:33.812863  619737 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:30:33.812961  619737 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:30:33.813094  619737 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:30:33.813279  619737 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:30:33.813350  619737 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:30:33.921105  619737 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:30:33.921239  619737 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:30:34.923796  619737 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003590789s
	I0127 14:30:34.923910  619737 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:30:39.924192  619737 kubeadm.go:310] [api-check] The API server is healthy after 5.001293699s
	I0127 14:30:39.935144  619737 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:30:39.958823  619737 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:30:39.996057  619737 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:30:39.996312  619737 kubeadm.go:310] [mark-control-plane] Marking the node bridge-418372 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:30:40.010139  619737 kubeadm.go:310] [bootstrap-token] Using token: r7ccxo.kgv6nq8qhg7ecp3z
	I0127 14:30:40.011473  619737 out.go:235]   - Configuring RBAC rules ...
	I0127 14:30:40.011597  619737 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:30:40.020901  619737 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:30:40.029801  619737 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:30:40.033037  619737 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:30:40.036413  619737 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:30:40.039570  619737 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:30:40.328924  619737 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:30:40.747593  619737 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:30:41.328255  619737 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:30:41.329246  619737 kubeadm.go:310] 
	I0127 14:30:41.329310  619737 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:30:41.329319  619737 kubeadm.go:310] 
	I0127 14:30:41.329399  619737 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:30:41.329407  619737 kubeadm.go:310] 
	I0127 14:30:41.329428  619737 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:30:41.329482  619737 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:30:41.329526  619737 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:30:41.329555  619737 kubeadm.go:310] 
	I0127 14:30:41.329655  619737 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:30:41.329665  619737 kubeadm.go:310] 
	I0127 14:30:41.329745  619737 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:30:41.329766  619737 kubeadm.go:310] 
	I0127 14:30:41.329851  619737 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:30:41.329954  619737 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:30:41.330056  619737 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:30:41.330071  619737 kubeadm.go:310] 
	I0127 14:30:41.330176  619737 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:30:41.330296  619737 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:30:41.330314  619737 kubeadm.go:310] 
	I0127 14:30:41.330417  619737 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r7ccxo.kgv6nq8qhg7ecp3z \
	I0127 14:30:41.330548  619737 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a60ff6161e02b5a75df4f173d820326404ac2037065d4322193a60c87e11fb02 \
	I0127 14:30:41.330576  619737 kubeadm.go:310] 	--control-plane 
	I0127 14:30:41.330582  619737 kubeadm.go:310] 
	I0127 14:30:41.330649  619737 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:30:41.330656  619737 kubeadm.go:310] 
	I0127 14:30:41.330721  619737 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r7ccxo.kgv6nq8qhg7ecp3z \
	I0127 14:30:41.330803  619737 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a60ff6161e02b5a75df4f173d820326404ac2037065d4322193a60c87e11fb02 
	I0127 14:30:41.331862  619737 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:30:41.331884  619737 cni.go:84] Creating CNI manager for "bridge"
	I0127 14:30:41.333393  619737 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:30:41.334528  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:30:41.347863  619737 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:30:41.370602  619737 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:30:41.370705  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:41.370715  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-418372 minikube.k8s.io/updated_at=2025_01_27T14_30_41_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=bridge-418372 minikube.k8s.io/primary=true
	I0127 14:30:41.535021  619737 ops.go:34] apiserver oom_adj: -16
	I0127 14:30:41.535150  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:42.035857  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:42.535777  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:43.035364  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:43.535454  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:44.035873  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:44.535827  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:44.617399  619737 kubeadm.go:1113] duration metric: took 3.24676419s to wait for elevateKubeSystemPrivileges
	I0127 14:30:44.617441  619737 kubeadm.go:394] duration metric: took 15.050776308s to StartCluster
	I0127 14:30:44.617463  619737 settings.go:142] acquiring lock: {Name:mk3584d1c70a231ddef63c926d3bba51690f47f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:44.617560  619737 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:30:44.620051  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:44.620334  619737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 14:30:44.620353  619737 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:30:44.620428  619737 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:30:44.620531  619737 addons.go:69] Setting storage-provisioner=true in profile "bridge-418372"
	I0127 14:30:44.620558  619737 addons.go:238] Setting addon storage-provisioner=true in "bridge-418372"
	I0127 14:30:44.620565  619737 addons.go:69] Setting default-storageclass=true in profile "bridge-418372"
	I0127 14:30:44.620590  619737 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-418372"
	I0127 14:30:44.620600  619737 host.go:66] Checking if "bridge-418372" exists ...
	I0127 14:30:44.620555  619737 config.go:182] Loaded profile config "bridge-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:30:44.621004  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:30:44.621004  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:30:44.621053  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:30:44.621060  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:30:44.622030  619737 out.go:177] * Verifying Kubernetes components...
	I0127 14:30:44.623413  619737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:30:44.638348  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0127 14:30:44.638411  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I0127 14:30:44.638914  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:30:44.638980  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:30:44.639484  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:30:44.639505  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:30:44.639683  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:30:44.639713  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:30:44.639863  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:30:44.640136  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:30:44.640333  619737 main.go:141] libmachine: (bridge-418372) Calling .GetState
	I0127 14:30:44.640456  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:30:44.640501  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:30:44.644089  619737 addons.go:238] Setting addon default-storageclass=true in "bridge-418372"
	I0127 14:30:44.644125  619737 host.go:66] Checking if "bridge-418372" exists ...
	I0127 14:30:44.644404  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:30:44.644446  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:30:44.659927  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38997
	I0127 14:30:44.660334  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:30:44.660485  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0127 14:30:44.660844  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:30:44.660864  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:30:44.660884  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:30:44.661227  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:30:44.661372  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:30:44.661395  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:30:44.661697  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:30:44.661858  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:30:44.661873  619737 main.go:141] libmachine: (bridge-418372) Calling .GetState
	I0127 14:30:44.661898  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:30:44.663597  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:44.665488  619737 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:30:44.666780  619737 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:30:44.666804  619737 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:30:44.666825  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:44.672578  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:44.673044  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:44.673131  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:44.673270  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:44.673488  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:44.673671  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:44.673816  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:44.681682  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I0127 14:30:44.682285  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:30:44.682855  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:30:44.682881  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:30:44.683214  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:30:44.683429  619737 main.go:141] libmachine: (bridge-418372) Calling .GetState
	I0127 14:30:44.685036  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:44.685243  619737 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:30:44.685260  619737 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:30:44.685278  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:44.688145  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:44.688619  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:44.688643  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:44.688793  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:44.688984  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:44.689180  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:44.689327  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:44.781712  619737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 14:30:44.825946  619737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:30:44.954327  619737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:30:44.981928  619737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:30:45.193916  619737 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0127 14:30:45.195924  619737 node_ready.go:35] waiting up to 15m0s for node "bridge-418372" to be "Ready" ...
	I0127 14:30:45.209959  619737 node_ready.go:49] node "bridge-418372" has status "Ready":"True"
	I0127 14:30:45.209983  619737 node_ready.go:38] duration metric: took 14.022807ms for node "bridge-418372" to be "Ready" ...
	I0127 14:30:45.209994  619737 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:30:45.230141  619737 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:45.248044  619737 main.go:141] libmachine: Making call to close driver server
	I0127 14:30:45.248072  619737 main.go:141] libmachine: (bridge-418372) Calling .Close
	I0127 14:30:45.248403  619737 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:30:45.248459  619737 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:30:45.248473  619737 main.go:141] libmachine: Making call to close driver server
	I0127 14:30:45.248482  619737 main.go:141] libmachine: (bridge-418372) Calling .Close
	I0127 14:30:45.248433  619737 main.go:141] libmachine: (bridge-418372) DBG | Closing plugin on server side
	I0127 14:30:45.248748  619737 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:30:45.248801  619737 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:30:45.248762  619737 main.go:141] libmachine: (bridge-418372) DBG | Closing plugin on server side
	I0127 14:30:45.254357  619737 main.go:141] libmachine: Making call to close driver server
	I0127 14:30:45.254379  619737 main.go:141] libmachine: (bridge-418372) Calling .Close
	I0127 14:30:45.254623  619737 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:30:45.254643  619737 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:30:45.582068  619737 main.go:141] libmachine: Making call to close driver server
	I0127 14:30:45.582101  619737 main.go:141] libmachine: (bridge-418372) Calling .Close
	I0127 14:30:45.582471  619737 main.go:141] libmachine: (bridge-418372) DBG | Closing plugin on server side
	I0127 14:30:45.582507  619737 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:30:45.582518  619737 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:30:45.582563  619737 main.go:141] libmachine: Making call to close driver server
	I0127 14:30:45.582576  619737 main.go:141] libmachine: (bridge-418372) Calling .Close
	I0127 14:30:45.582914  619737 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:30:45.582964  619737 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:30:45.582961  619737 main.go:141] libmachine: (bridge-418372) DBG | Closing plugin on server side
	I0127 14:30:45.584401  619737 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0127 14:30:45.585530  619737 addons.go:514] duration metric: took 965.103449ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0127 14:30:45.700598  619737 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-418372" context rescaled to 1 replicas
	I0127 14:30:46.237276  619737 pod_ready.go:93] pod "etcd-bridge-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:46.237378  619737 pod_ready.go:82] duration metric: took 1.007212145s for pod "etcd-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:46.237408  619737 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:48.312592  619737 pod_ready.go:103] pod "kube-apiserver-bridge-418372" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:50.743934  619737 pod_ready.go:103] pod "kube-apiserver-bridge-418372" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:52.744428  619737 pod_ready.go:103] pod "kube-apiserver-bridge-418372" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:54.244761  619737 pod_ready.go:93] pod "kube-apiserver-bridge-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:54.244788  619737 pod_ready.go:82] duration metric: took 8.007362536s for pod "kube-apiserver-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.244803  619737 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.249656  619737 pod_ready.go:93] pod "kube-controller-manager-bridge-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:54.249672  619737 pod_ready.go:82] duration metric: took 4.861469ms for pod "kube-controller-manager-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.249681  619737 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-srq4p" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.254195  619737 pod_ready.go:93] pod "kube-proxy-srq4p" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:54.254210  619737 pod_ready.go:82] duration metric: took 4.523332ms for pod "kube-proxy-srq4p" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.254218  619737 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.258549  619737 pod_ready.go:93] pod "kube-scheduler-bridge-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:54.258563  619737 pod_ready.go:82] duration metric: took 4.340039ms for pod "kube-scheduler-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.258569  619737 pod_ready.go:39] duration metric: took 9.048563243s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:30:54.258586  619737 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:30:54.258635  619737 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:30:54.275369  619737 api_server.go:72] duration metric: took 9.654981576s to wait for apiserver process to appear ...
	I0127 14:30:54.275386  619737 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:30:54.275399  619737 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8443/healthz ...
	I0127 14:30:54.279770  619737 api_server.go:279] https://192.168.72.158:8443/healthz returned 200:
	ok
	I0127 14:30:54.280702  619737 api_server.go:141] control plane version: v1.32.1
	I0127 14:30:54.280724  619737 api_server.go:131] duration metric: took 5.331614ms to wait for apiserver health ...
	I0127 14:30:54.280731  619737 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:30:54.284562  619737 system_pods.go:59] 7 kube-system pods found
	I0127 14:30:54.284585  619737 system_pods.go:61] "coredns-668d6bf9bc-bxt2d" [30688a6a-decf-494a-892c-246d5fd4ae17] Running
	I0127 14:30:54.284591  619737 system_pods.go:61] "etcd-bridge-418372" [2c893afa-1f78-4889-9a64-8e6976949658] Running
	I0127 14:30:54.284595  619737 system_pods.go:61] "kube-apiserver-bridge-418372" [e70ad4b0-21ca-4833-b5f6-46fe9d39dbad] Running
	I0127 14:30:54.284599  619737 system_pods.go:61] "kube-controller-manager-bridge-418372" [e2272719-2527-4148-a4f0-13395e47ee74] Running
	I0127 14:30:54.284602  619737 system_pods.go:61] "kube-proxy-srq4p" [bbca3a8d-4a8a-474b-b117-77557ced6ccb] Running
	I0127 14:30:54.284606  619737 system_pods.go:61] "kube-scheduler-bridge-418372" [8cd36bf9-5b97-4ee8-871c-5d15211c4106] Running
	I0127 14:30:54.284609  619737 system_pods.go:61] "storage-provisioner" [69dac337-57c8-495b-9c4c-9f6d81adccaf] Running
	I0127 14:30:54.284615  619737 system_pods.go:74] duration metric: took 3.878571ms to wait for pod list to return data ...
	I0127 14:30:54.284624  619737 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:30:54.286667  619737 default_sa.go:45] found service account: "default"
	I0127 14:30:54.286687  619737 default_sa.go:55] duration metric: took 2.056793ms for default service account to be created ...
	I0127 14:30:54.286697  619737 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:30:54.447492  619737 system_pods.go:87] 7 kube-system pods found
	I0127 14:30:54.642792  619737 system_pods.go:105] "coredns-668d6bf9bc-bxt2d" [30688a6a-decf-494a-892c-246d5fd4ae17] Running
	I0127 14:30:54.642812  619737 system_pods.go:105] "etcd-bridge-418372" [2c893afa-1f78-4889-9a64-8e6976949658] Running
	I0127 14:30:54.642816  619737 system_pods.go:105] "kube-apiserver-bridge-418372" [e70ad4b0-21ca-4833-b5f6-46fe9d39dbad] Running
	I0127 14:30:54.642821  619737 system_pods.go:105] "kube-controller-manager-bridge-418372" [e2272719-2527-4148-a4f0-13395e47ee74] Running
	I0127 14:30:54.642826  619737 system_pods.go:105] "kube-proxy-srq4p" [bbca3a8d-4a8a-474b-b117-77557ced6ccb] Running
	I0127 14:30:54.642830  619737 system_pods.go:105] "kube-scheduler-bridge-418372" [8cd36bf9-5b97-4ee8-871c-5d15211c4106] Running
	I0127 14:30:54.642835  619737 system_pods.go:105] "storage-provisioner" [69dac337-57c8-495b-9c4c-9f6d81adccaf] Running
	I0127 14:30:54.642842  619737 system_pods.go:147] duration metric: took 356.138334ms to wait for k8s-apps to be running ...
	I0127 14:30:54.642848  619737 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 14:30:54.642892  619737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:30:54.663960  619737 system_svc.go:56] duration metric: took 21.1006ms WaitForService to wait for kubelet
	I0127 14:30:54.663982  619737 kubeadm.go:582] duration metric: took 10.043596268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:30:54.664012  619737 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:30:54.842788  619737 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:30:54.842821  619737 node_conditions.go:123] node cpu capacity is 2
	I0127 14:30:54.842841  619737 node_conditions.go:105] duration metric: took 178.823111ms to run NodePressure ...
	I0127 14:30:54.842855  619737 start.go:241] waiting for startup goroutines ...
	I0127 14:30:54.842864  619737 start.go:246] waiting for cluster config update ...
	I0127 14:30:54.842879  619737 start.go:255] writing updated cluster config ...
	I0127 14:30:54.843155  619737 ssh_runner.go:195] Run: rm -f paused
	I0127 14:30:54.893663  619737 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:30:54.895559  619737 out.go:177] * Done! kubectl is now configured to use "bridge-418372" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 14:38:24 embed-certs-742142 crio[724]: time="2025-01-27 14:38:24.951272911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988704951252082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4562632-d5fd-4266-853d-202f8bb15d7c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:38:24 embed-certs-742142 crio[724]: time="2025-01-27 14:38:24.951914767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5f468eb-9095-4e01-b133-7b349f70739f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:38:24 embed-certs-742142 crio[724]: time="2025-01-27 14:38:24.952011615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5f468eb-9095-4e01-b133-7b349f70739f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:38:24 embed-certs-742142 crio[724]: time="2025-01-27 14:38:24.952236790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96145da82a4143e5d65c41560620f08bf85190e6d9b45aea563fd1f01a0a99d7,PodSandboxId:532c76fe5261dc3841192ba717e090b62ae017859157b3d76681a291619f8f0f,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737988670329307218,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-frtmc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6502e9e7-e803-4a71-a2e8-b4d25b78f0e6,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:479a2cfcd7cdcb7fad6c90d77e3a78973366118e0adc2dc99e9d8e7ed9aba774,PodSandboxId:10c9ea3327022774db5752c8b0ef734700dc6d6cdb644b106d2c877d11ae11fc,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987395968593737,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-s92rm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: a42125a5-7f23-4491-a1aa-55656a4294d6,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588b7aa936c0cc0c9f23ddf231819b60b04d804dcab7770f754d6e9e80249ce8,PodSandboxId:0714025063b865a6c3a44df67ca6a266d55215ae2a7d0a8d75f344721e07facb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987389185326110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kc8bv,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7817b17a-8213-42da-8957-5d97c8df5059,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c24fc2cf06e2f8467fe448f42f3ceaf3537880e15c5c9bb2878a4834bb78b79a,PodSandboxId:7e6537c678af72b7c52bcb865f5f1e484a5f23fe7a6fb6e7452ff2aedba17b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c1
3f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987389118251315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hmkdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4283df2-0988-4342-9de1-896ac5a40d86,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e428099be9f4a49f9a0e5a8bcba8cf32bd082326832da61bd203115c39bd79,PodSandboxId:b97595149bbf34cff1fbb0a809e0355ba577d977e64bfe0374d935bcaef5d1cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,
},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987389010088254,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b842b41e-ceeb-4132-bf70-2443e4c27ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ed8cc931135411e5c74b9ca21a06ad6b696e590ce72dfae372b744b62a1750,PodSandboxId:e3d8c2c589f35487f3a8755191fcb608e2e40281a12f47f69282936d8dcc2aef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987388141316400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvbtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e2d7ac-5dd4-45f1-957d-9189f9d6a607,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c17ad02c6b15ba57a3a0b77c5c0582cc81e2ec5f555e9470622fa25e42bd92,PodSandboxId:ede47c4ca4048693c2aad5d91abdcd47a6e0378b7c7e9db65a8c39ddb48a5789,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737987377188530909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4d77331e7f8387607d5dedf89c3f86f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c4052d759b7a63010f97d37c7961e3cffaabe08ee1ec787d91c51d56154daf,PodSandboxId:92f9ae5c2ab3713b7f21156e22bedee39d4170f46569dcc92dcfe1e466d80d07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737987377114470254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b906771aab30792b3a19fdfdb346306,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f65f7a1e4f15dec4c3466b8a477d659b39d5a1db12bf497b7022aee8b15d0da,PodSandboxId:243559db57171e96ce401a41a47921c6b7694d727f1cc2a29fce80ed2511cb93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737987377154422515,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2842e5de646cebae6707739ec868992c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb869ec5b25b20bca73dbc599efa2b584e4ba18dd19d065845f15c6a02f434d,PodSandboxId:da65325e3e4cc9d30d0f6b3c7d133a8903b9977ef4b94f46b5bac69a7b25ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987377105925665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18d384825ef6e3faf1e920cc9072764,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076907ea17b729b442b83ec0e75e2cc06e4a8200063356baf59df8f0e260e273,PodSandboxId:da8669732c84e64dca8582539936e09cab03c5b9941862635999fcc398f494bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737987089711356526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18d384825ef6e3faf1e920cc9072764,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5f468eb-9095-4e01-b133-7b349f70739f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:38:24 embed-certs-742142 crio[724]: time="2025-01-27 14:38:24.985589305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f8a2815-3a42-4aaf-acb7-cac601330a65 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:38:24 embed-certs-742142 crio[724]: time="2025-01-27 14:38:24.985644749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f8a2815-3a42-4aaf-acb7-cac601330a65 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:38:24 embed-certs-742142 crio[724]: time="2025-01-27 14:38:24.986990274Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d9ac962-aaea-4a66-8fb1-5db984d6373b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:38:24 embed-certs-742142 crio[724]: time="2025-01-27 14:38:24.987386048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988704987370810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d9ac962-aaea-4a66-8fb1-5db984d6373b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:38:24 embed-certs-742142 crio[724]: time="2025-01-27 14:38:24.988114013Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ac9c0e8-d742-4950-9de4-c48d79d1d80e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:38:24 embed-certs-742142 crio[724]: time="2025-01-27 14:38:24.988162446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ac9c0e8-d742-4950-9de4-c48d79d1d80e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:38:24 embed-certs-742142 crio[724]: time="2025-01-27 14:38:24.988390113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96145da82a4143e5d65c41560620f08bf85190e6d9b45aea563fd1f01a0a99d7,PodSandboxId:532c76fe5261dc3841192ba717e090b62ae017859157b3d76681a291619f8f0f,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737988670329307218,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-frtmc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6502e9e7-e803-4a71-a2e8-b4d25b78f0e6,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:479a2cfcd7cdcb7fad6c90d77e3a78973366118e0adc2dc99e9d8e7ed9aba774,PodSandboxId:10c9ea3327022774db5752c8b0ef734700dc6d6cdb644b106d2c877d11ae11fc,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987395968593737,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-s92rm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: a42125a5-7f23-4491-a1aa-55656a4294d6,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588b7aa936c0cc0c9f23ddf231819b60b04d804dcab7770f754d6e9e80249ce8,PodSandboxId:0714025063b865a6c3a44df67ca6a266d55215ae2a7d0a8d75f344721e07facb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987389185326110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kc8bv,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7817b17a-8213-42da-8957-5d97c8df5059,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c24fc2cf06e2f8467fe448f42f3ceaf3537880e15c5c9bb2878a4834bb78b79a,PodSandboxId:7e6537c678af72b7c52bcb865f5f1e484a5f23fe7a6fb6e7452ff2aedba17b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c1
3f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987389118251315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hmkdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4283df2-0988-4342-9de1-896ac5a40d86,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e428099be9f4a49f9a0e5a8bcba8cf32bd082326832da61bd203115c39bd79,PodSandboxId:b97595149bbf34cff1fbb0a809e0355ba577d977e64bfe0374d935bcaef5d1cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,
},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987389010088254,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b842b41e-ceeb-4132-bf70-2443e4c27ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ed8cc931135411e5c74b9ca21a06ad6b696e590ce72dfae372b744b62a1750,PodSandboxId:e3d8c2c589f35487f3a8755191fcb608e2e40281a12f47f69282936d8dcc2aef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987388141316400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvbtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e2d7ac-5dd4-45f1-957d-9189f9d6a607,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c17ad02c6b15ba57a3a0b77c5c0582cc81e2ec5f555e9470622fa25e42bd92,PodSandboxId:ede47c4ca4048693c2aad5d91abdcd47a6e0378b7c7e9db65a8c39ddb48a5789,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737987377188530909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4d77331e7f8387607d5dedf89c3f86f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c4052d759b7a63010f97d37c7961e3cffaabe08ee1ec787d91c51d56154daf,PodSandboxId:92f9ae5c2ab3713b7f21156e22bedee39d4170f46569dcc92dcfe1e466d80d07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737987377114470254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b906771aab30792b3a19fdfdb346306,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f65f7a1e4f15dec4c3466b8a477d659b39d5a1db12bf497b7022aee8b15d0da,PodSandboxId:243559db57171e96ce401a41a47921c6b7694d727f1cc2a29fce80ed2511cb93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737987377154422515,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2842e5de646cebae6707739ec868992c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb869ec5b25b20bca73dbc599efa2b584e4ba18dd19d065845f15c6a02f434d,PodSandboxId:da65325e3e4cc9d30d0f6b3c7d133a8903b9977ef4b94f46b5bac69a7b25ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987377105925665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18d384825ef6e3faf1e920cc9072764,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076907ea17b729b442b83ec0e75e2cc06e4a8200063356baf59df8f0e260e273,PodSandboxId:da8669732c84e64dca8582539936e09cab03c5b9941862635999fcc398f494bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737987089711356526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18d384825ef6e3faf1e920cc9072764,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ac9c0e8-d742-4950-9de4-c48d79d1d80e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.018297775Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7418f9f-6313-42a2-b5fb-7e773a6d6249 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.018345632Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7418f9f-6313-42a2-b5fb-7e773a6d6249 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.020113320Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=318fd728-5b17-41c3-a806-a5381c03a02a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.021190439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988705021027776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=318fd728-5b17-41c3-a806-a5381c03a02a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.024662282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8036b438-638a-4b8f-b6a2-fb94edf5d334 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.024724454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8036b438-638a-4b8f-b6a2-fb94edf5d334 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.025057199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96145da82a4143e5d65c41560620f08bf85190e6d9b45aea563fd1f01a0a99d7,PodSandboxId:532c76fe5261dc3841192ba717e090b62ae017859157b3d76681a291619f8f0f,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737988670329307218,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-frtmc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6502e9e7-e803-4a71-a2e8-b4d25b78f0e6,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:479a2cfcd7cdcb7fad6c90d77e3a78973366118e0adc2dc99e9d8e7ed9aba774,PodSandboxId:10c9ea3327022774db5752c8b0ef734700dc6d6cdb644b106d2c877d11ae11fc,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987395968593737,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-s92rm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: a42125a5-7f23-4491-a1aa-55656a4294d6,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588b7aa936c0cc0c9f23ddf231819b60b04d804dcab7770f754d6e9e80249ce8,PodSandboxId:0714025063b865a6c3a44df67ca6a266d55215ae2a7d0a8d75f344721e07facb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987389185326110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kc8bv,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7817b17a-8213-42da-8957-5d97c8df5059,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c24fc2cf06e2f8467fe448f42f3ceaf3537880e15c5c9bb2878a4834bb78b79a,PodSandboxId:7e6537c678af72b7c52bcb865f5f1e484a5f23fe7a6fb6e7452ff2aedba17b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c1
3f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987389118251315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hmkdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4283df2-0988-4342-9de1-896ac5a40d86,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e428099be9f4a49f9a0e5a8bcba8cf32bd082326832da61bd203115c39bd79,PodSandboxId:b97595149bbf34cff1fbb0a809e0355ba577d977e64bfe0374d935bcaef5d1cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,
},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987389010088254,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b842b41e-ceeb-4132-bf70-2443e4c27ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ed8cc931135411e5c74b9ca21a06ad6b696e590ce72dfae372b744b62a1750,PodSandboxId:e3d8c2c589f35487f3a8755191fcb608e2e40281a12f47f69282936d8dcc2aef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987388141316400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvbtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e2d7ac-5dd4-45f1-957d-9189f9d6a607,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c17ad02c6b15ba57a3a0b77c5c0582cc81e2ec5f555e9470622fa25e42bd92,PodSandboxId:ede47c4ca4048693c2aad5d91abdcd47a6e0378b7c7e9db65a8c39ddb48a5789,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737987377188530909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4d77331e7f8387607d5dedf89c3f86f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c4052d759b7a63010f97d37c7961e3cffaabe08ee1ec787d91c51d56154daf,PodSandboxId:92f9ae5c2ab3713b7f21156e22bedee39d4170f46569dcc92dcfe1e466d80d07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737987377114470254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b906771aab30792b3a19fdfdb346306,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f65f7a1e4f15dec4c3466b8a477d659b39d5a1db12bf497b7022aee8b15d0da,PodSandboxId:243559db57171e96ce401a41a47921c6b7694d727f1cc2a29fce80ed2511cb93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737987377154422515,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2842e5de646cebae6707739ec868992c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb869ec5b25b20bca73dbc599efa2b584e4ba18dd19d065845f15c6a02f434d,PodSandboxId:da65325e3e4cc9d30d0f6b3c7d133a8903b9977ef4b94f46b5bac69a7b25ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987377105925665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18d384825ef6e3faf1e920cc9072764,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076907ea17b729b442b83ec0e75e2cc06e4a8200063356baf59df8f0e260e273,PodSandboxId:da8669732c84e64dca8582539936e09cab03c5b9941862635999fcc398f494bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737987089711356526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18d384825ef6e3faf1e920cc9072764,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8036b438-638a-4b8f-b6a2-fb94edf5d334 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.060897304Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3be781eb-59e0-42c6-a7c7-e9d23546ff2b name=/runtime.v1.RuntimeService/Version
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.060967386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3be781eb-59e0-42c6-a7c7-e9d23546ff2b name=/runtime.v1.RuntimeService/Version
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.062413156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49d310bb-dd70-4262-9008-5d73e328bc02 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.063044159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988705063024231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49d310bb-dd70-4262-9008-5d73e328bc02 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.063559661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bf43ac0-e322-4dfa-83bd-53daed872273 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.063606170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bf43ac0-e322-4dfa-83bd-53daed872273 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:38:25 embed-certs-742142 crio[724]: time="2025-01-27 14:38:25.063900505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:96145da82a4143e5d65c41560620f08bf85190e6d9b45aea563fd1f01a0a99d7,PodSandboxId:532c76fe5261dc3841192ba717e090b62ae017859157b3d76681a291619f8f0f,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737988670329307218,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-frtmc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6502e9e7-e803-4a71-a2e8-b4d25b78f0e6,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:479a2cfcd7cdcb7fad6c90d77e3a78973366118e0adc2dc99e9d8e7ed9aba774,PodSandboxId:10c9ea3327022774db5752c8b0ef734700dc6d6cdb644b106d2c877d11ae11fc,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987395968593737,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-s92rm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: a42125a5-7f23-4491-a1aa-55656a4294d6,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588b7aa936c0cc0c9f23ddf231819b60b04d804dcab7770f754d6e9e80249ce8,PodSandboxId:0714025063b865a6c3a44df67ca6a266d55215ae2a7d0a8d75f344721e07facb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987389185326110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kc8bv,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7817b17a-8213-42da-8957-5d97c8df5059,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c24fc2cf06e2f8467fe448f42f3ceaf3537880e15c5c9bb2878a4834bb78b79a,PodSandboxId:7e6537c678af72b7c52bcb865f5f1e484a5f23fe7a6fb6e7452ff2aedba17b74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c1
3f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987389118251315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hmkdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4283df2-0988-4342-9de1-896ac5a40d86,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e428099be9f4a49f9a0e5a8bcba8cf32bd082326832da61bd203115c39bd79,PodSandboxId:b97595149bbf34cff1fbb0a809e0355ba577d977e64bfe0374d935bcaef5d1cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,
},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987389010088254,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b842b41e-ceeb-4132-bf70-2443e4c27ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ed8cc931135411e5c74b9ca21a06ad6b696e590ce72dfae372b744b62a1750,PodSandboxId:e3d8c2c589f35487f3a8755191fcb608e2e40281a12f47f69282936d8dcc2aef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987388141316400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvbtr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e2d7ac-5dd4-45f1-957d-9189f9d6a607,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82c17ad02c6b15ba57a3a0b77c5c0582cc81e2ec5f555e9470622fa25e42bd92,PodSandboxId:ede47c4ca4048693c2aad5d91abdcd47a6e0378b7c7e9db65a8c39ddb48a5789,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737987377188530909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4d77331e7f8387607d5dedf89c3f86f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c4052d759b7a63010f97d37c7961e3cffaabe08ee1ec787d91c51d56154daf,PodSandboxId:92f9ae5c2ab3713b7f21156e22bedee39d4170f46569dcc92dcfe1e466d80d07,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737987377114470254,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b906771aab30792b3a19fdfdb346306,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f65f7a1e4f15dec4c3466b8a477d659b39d5a1db12bf497b7022aee8b15d0da,PodSandboxId:243559db57171e96ce401a41a47921c6b7694d727f1cc2a29fce80ed2511cb93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737987377154422515,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2842e5de646cebae6707739ec868992c,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:feb869ec5b25b20bca73dbc599efa2b584e4ba18dd19d065845f15c6a02f434d,PodSandboxId:da65325e3e4cc9d30d0f6b3c7d133a8903b9977ef4b94f46b5bac69a7b25ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987377105925665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18d384825ef6e3faf1e920cc9072764,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:076907ea17b729b442b83ec0e75e2cc06e4a8200063356baf59df8f0e260e273,PodSandboxId:da8669732c84e64dca8582539936e09cab03c5b9941862635999fcc398f494bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737987089711356526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-742142,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c18d384825ef6e3faf1e920cc9072764,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bf43ac0-e322-4dfa-83bd-53daed872273 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	96145da82a414       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           34 seconds ago      Exited              dashboard-metrics-scraper   9                   532c76fe5261d       dashboard-metrics-scraper-86c6bf9756-frtmc
	479a2cfcd7cdc       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   10c9ea3327022       kubernetes-dashboard-7779f9b69b-s92rm
	588b7aa936c0c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   0714025063b86       coredns-668d6bf9bc-kc8bv
	c24fc2cf06e2f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   7e6537c678af7       coredns-668d6bf9bc-hmkdd
	02e428099be9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   b97595149bbf3       storage-provisioner
	28ed8cc931135       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           21 minutes ago      Running             kube-proxy                  0                   e3d8c2c589f35       kube-proxy-lvbtr
	82c17ad02c6b1       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           22 minutes ago      Running             etcd                        2                   ede47c4ca4048       etcd-embed-certs-742142
	7f65f7a1e4f15       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           22 minutes ago      Running             kube-controller-manager     2                   243559db57171       kube-controller-manager-embed-certs-742142
	76c4052d759b7       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           22 minutes ago      Running             kube-scheduler              2                   92f9ae5c2ab37       kube-scheduler-embed-certs-742142
	feb869ec5b25b       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           22 minutes ago      Running             kube-apiserver              2                   da65325e3e4cc       kube-apiserver-embed-certs-742142
	076907ea17b72       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           26 minutes ago      Exited              kube-apiserver              1                   da8669732c84e       kube-apiserver-embed-certs-742142
	
	
	==> coredns [588b7aa936c0cc0c9f23ddf231819b60b04d804dcab7770f754d6e9e80249ce8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c24fc2cf06e2f8467fe448f42f3ceaf3537880e15c5c9bb2878a4834bb78b79a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-742142
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-742142
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d
	                    minikube.k8s.io/name=embed-certs-742142
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T14_16_23_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 14:16:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-742142
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 14:38:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 14:35:56 +0000   Mon, 27 Jan 2025 14:16:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 14:35:56 +0000   Mon, 27 Jan 2025 14:16:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 14:35:56 +0000   Mon, 27 Jan 2025 14:16:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 14:35:56 +0000   Mon, 27 Jan 2025 14:16:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.87
	  Hostname:    embed-certs-742142
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c7adbdc146744e382c043723573dc73
	  System UUID:                5c7adbdc-1467-44e3-82c0-43723573dc73
	  Boot ID:                    909aff81-c4d6-4018-879c-e20688c57d1c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-hmkdd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-kc8bv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-742142                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-742142             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-742142    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-lvbtr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-742142             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-f79f97bbb-kclqf                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-frtmc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-s92rm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-742142 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-742142 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-742142 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-742142 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-742142 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-742142 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-742142 event: Registered Node embed-certs-742142 in Controller
	
	
	==> dmesg <==
	[  +4.922451] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.776301] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.634625] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.473015] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.058223] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056091] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.200585] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.135109] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.289226] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.161495] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +2.353669] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +0.059261] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.561612] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.994956] kauditd_printk_skb: 85 callbacks suppressed
	[Jan27 14:16] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.819084] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +4.560711] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.501460] systemd-fstab-generator[3027]: Ignoring "noauto" option for root device
	[  +4.903394] systemd-fstab-generator[3137]: Ignoring "noauto" option for root device
	[  +0.111157] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.956517] kauditd_printk_skb: 110 callbacks suppressed
	[  +6.247152] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.150373] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [82c17ad02c6b15ba57a3a0b77c5c0582cc81e2ec5f555e9470622fa25e42bd92] <==
	{"level":"warn","ts":"2025-01-27T14:29:45.945940Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.789848ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:29:45.947064Z","caller":"traceutil/trace.go:171","msg":"trace[909044452] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1305; }","duration":"363.952545ms","start":"2025-01-27T14:29:45.583101Z","end":"2025-01-27T14:29:45.947054Z","steps":["trace[909044452] 'agreement among raft nodes before linearized reading'  (duration: 362.80094ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:29:45.947114Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:29:45.583085Z","time spent":"364.015785ms","remote":"127.0.0.1:46246","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T14:29:46.980404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.960778ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2582414886135543642 > lease_revoke:<id:23d694a81eca36c8>","response":"size:28"}
	{"level":"info","ts":"2025-01-27T14:30:30.288136Z","caller":"traceutil/trace.go:171","msg":"trace[771869459] linearizableReadLoop","detail":"{readStateIndex:1527; appliedIndex:1526; }","duration":"108.479569ms","start":"2025-01-27T14:30:30.179630Z","end":"2025-01-27T14:30:30.288110Z","steps":["trace[771869459] 'read index received'  (duration: 108.311261ms)","trace[771869459] 'applied index is now lower than readState.Index'  (duration: 167.88µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T14:30:30.288348Z","caller":"traceutil/trace.go:171","msg":"trace[1394940461] transaction","detail":"{read_only:false; response_revision:1341; number_of_response:1; }","duration":"113.786183ms","start":"2025-01-27T14:30:30.174550Z","end":"2025-01-27T14:30:30.288337Z","steps":["trace[1394940461] 'process raft request'  (duration: 113.434516ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:30:30.288553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.842184ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:30:30.288600Z","caller":"traceutil/trace.go:171","msg":"trace[1299673236] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1341; }","duration":"108.987672ms","start":"2025-01-27T14:30:30.179605Z","end":"2025-01-27T14:30:30.288593Z","steps":["trace[1299673236] 'agreement among raft nodes before linearized reading'  (duration: 108.843327ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:30:30.544663Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.139855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:30:30.545222Z","caller":"traceutil/trace.go:171","msg":"trace[1379309278] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1341; }","duration":"164.727911ms","start":"2025-01-27T14:30:30.380474Z","end":"2025-01-27T14:30:30.545202Z","steps":["trace[1379309278] 'range keys from in-memory index tree'  (duration: 164.091536ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:30:48.313295Z","caller":"traceutil/trace.go:171","msg":"trace[75918350] linearizableReadLoop","detail":"{readStateIndex:1545; appliedIndex:1544; }","duration":"134.339428ms","start":"2025-01-27T14:30:48.178939Z","end":"2025-01-27T14:30:48.313278Z","steps":["trace[75918350] 'read index received'  (duration: 134.192993ms)","trace[75918350] 'applied index is now lower than readState.Index'  (duration: 145.988µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:30:48.313518Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.560035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:30:48.313562Z","caller":"traceutil/trace.go:171","msg":"trace[894601470] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1355; }","duration":"134.647272ms","start":"2025-01-27T14:30:48.178908Z","end":"2025-01-27T14:30:48.313555Z","steps":["trace[894601470] 'agreement among raft nodes before linearized reading'  (duration: 134.549612ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:30:48.313811Z","caller":"traceutil/trace.go:171","msg":"trace[1125765365] transaction","detail":"{read_only:false; response_revision:1355; number_of_response:1; }","duration":"269.974128ms","start":"2025-01-27T14:30:48.043817Z","end":"2025-01-27T14:30:48.313791Z","steps":["trace[1125765365] 'process raft request'  (duration: 269.348561ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:30:58.758056Z","caller":"traceutil/trace.go:171","msg":"trace[436691228] transaction","detail":"{read_only:false; response_revision:1364; number_of_response:1; }","duration":"320.239419ms","start":"2025-01-27T14:30:58.437770Z","end":"2025-01-27T14:30:58.758009Z","steps":["trace[436691228] 'process raft request'  (duration: 320.0988ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:30:58.758216Z","caller":"traceutil/trace.go:171","msg":"trace[1359242055] linearizableReadLoop","detail":"{readStateIndex:1556; appliedIndex:1556; }","duration":"179.470861ms","start":"2025-01-27T14:30:58.578731Z","end":"2025-01-27T14:30:58.758202Z","steps":["trace[1359242055] 'read index received'  (duration: 179.465671ms)","trace[1359242055] 'applied index is now lower than readState.Index'  (duration: 4.178µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:30:58.758232Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:30:58.437757Z","time spent":"320.371492ms","remote":"127.0.0.1:46242","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1362 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-27T14:30:58.758425Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.686972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:30:58.758503Z","caller":"traceutil/trace.go:171","msg":"trace[1937905002] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1364; }","duration":"179.785614ms","start":"2025-01-27T14:30:58.578708Z","end":"2025-01-27T14:30:58.758493Z","steps":["trace[1937905002] 'agreement among raft nodes before linearized reading'  (duration: 179.679128ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:31:18.718447Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1129}
	{"level":"info","ts":"2025-01-27T14:31:18.722587Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1129,"took":"3.822877ms","hash":4178267464,"current-db-size-bytes":2789376,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1740800,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T14:31:18.722635Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4178267464,"revision":1129,"compact-revision":879}
	{"level":"info","ts":"2025-01-27T14:36:18.725251Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1380}
	{"level":"info","ts":"2025-01-27T14:36:18.729650Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1380,"took":"3.906968ms","hash":3924095647,"current-db-size-bytes":2789376,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1769472,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T14:36:18.729699Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3924095647,"revision":1380,"compact-revision":1129}
	
	
	==> kernel <==
	 14:38:25 up 27 min,  0 users,  load average: 0.05, 0.14, 0.17
	Linux embed-certs-742142 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [076907ea17b729b442b83ec0e75e2cc06e4a8200063356baf59df8f0e260e273] <==
	W0127 14:16:09.574524       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.579059       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.598473       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.611060       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.622532       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.627162       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.707212       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.733190       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.824915       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.830207       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.831492       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.900694       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.965159       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.994666       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:09.994746       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:10.055183       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:10.163291       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:10.206105       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:10.256761       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:10.317053       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:10.379987       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:10.407094       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:10.747316       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:10.911639       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 14:16:13.837354       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [feb869ec5b25b20bca73dbc599efa2b584e4ba18dd19d065845f15c6a02f434d] <==
	 > logger="UnhandledError"
	I0127 14:34:20.929724       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 14:36:19.925069       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:36:19.925463       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 14:36:20.927804       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 14:36:20.927888       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:36:20.928012       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 14:36:20.928060       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 14:36:20.929345       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 14:36:20.929396       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 14:37:20.930024       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:37:20.930119       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 14:37:20.930047       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:37:20.930308       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 14:37:20.931355       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 14:37:20.931418       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7f65f7a1e4f15dec4c3466b8a477d659b39d5a1db12bf497b7022aee8b15d0da] <==
	E0127 14:33:26.684215       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:33:26.801197       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:33:56.689911       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:33:56.807593       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:34:26.696458       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:34:26.816996       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:34:56.703062       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:34:56.824003       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:35:26.709465       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:35:26.831153       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 14:35:56.115171       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-742142"
	E0127 14:35:56.715668       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:35:56.837971       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:36:26.722470       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:36:26.848146       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:36:56.728962       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:36:56.856485       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 14:37:26.735340       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:37:26.863558       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 14:37:40.333601       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="368.704µs"
	I0127 14:37:50.759246       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="72.382µs"
	I0127 14:37:53.441236       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="42.84µs"
	I0127 14:37:55.327911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="154.016µs"
	E0127 14:37:56.741139       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 14:37:56.870942       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [28ed8cc931135411e5c74b9ca21a06ad6b696e590ce72dfae372b744b62a1750] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:16:28.833937       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:16:28.850122       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.87"]
	E0127 14:16:28.850264       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:16:29.088655       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:16:29.088709       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:16:29.088749       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:16:29.095290       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:16:29.095623       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:16:29.095692       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:16:29.100255       1 config.go:199] "Starting service config controller"
	I0127 14:16:29.100369       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:16:29.100398       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:16:29.100403       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:16:29.101360       1 config.go:329] "Starting node config controller"
	I0127 14:16:29.101368       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:16:29.200934       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:16:29.200939       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:16:29.201633       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [76c4052d759b7a63010f97d37c7961e3cffaabe08ee1ec787d91c51d56154daf] <==
	W0127 14:16:19.963734       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 14:16:19.963817       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 14:16:19.963932       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 14:16:19.963998       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 14:16:19.964073       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 14:16:19.965911       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:16:19.966124       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 14:16:19.966225       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:16:19.970045       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 14:16:19.970094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:16:20.827540       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 14:16:20.827610       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 14:16:20.865150       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 14:16:20.865197       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:16:20.952522       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 14:16:20.952571       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:16:21.008074       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 14:16:21.008124       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:16:21.017213       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 14:16:21.017257       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:16:21.027670       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 14:16:21.027724       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:16:21.068690       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 14:16:21.068734       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 14:16:23.514991       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 14:37:50 embed-certs-742142 kubelet[3034]: I0127 14:37:50.733641    3034 scope.go:117] "RemoveContainer" containerID="388517c0d7a14468303be8823e06e4c7d1da8c9d1463e3d71326bb38320dc049"
	Jan 27 14:37:50 embed-certs-742142 kubelet[3034]: I0127 14:37:50.733993    3034 scope.go:117] "RemoveContainer" containerID="96145da82a4143e5d65c41560620f08bf85190e6d9b45aea563fd1f01a0a99d7"
	Jan 27 14:37:50 embed-certs-742142 kubelet[3034]: E0127 14:37:50.734144    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-frtmc_kubernetes-dashboard(6502e9e7-e803-4a71-a2e8-b4d25b78f0e6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-frtmc" podUID="6502e9e7-e803-4a71-a2e8-b4d25b78f0e6"
	Jan 27 14:37:52 embed-certs-742142 kubelet[3034]: E0127 14:37:52.645293    3034 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988672645112794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:37:52 embed-certs-742142 kubelet[3034]: E0127 14:37:52.645341    3034 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988672645112794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:37:53 embed-certs-742142 kubelet[3034]: I0127 14:37:53.428495    3034 scope.go:117] "RemoveContainer" containerID="96145da82a4143e5d65c41560620f08bf85190e6d9b45aea563fd1f01a0a99d7"
	Jan 27 14:37:53 embed-certs-742142 kubelet[3034]: E0127 14:37:53.428970    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-frtmc_kubernetes-dashboard(6502e9e7-e803-4a71-a2e8-b4d25b78f0e6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-frtmc" podUID="6502e9e7-e803-4a71-a2e8-b4d25b78f0e6"
	Jan 27 14:37:55 embed-certs-742142 kubelet[3034]: E0127 14:37:55.316253    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-kclqf" podUID="9539cc67-38fb-45e9-9884-c251c427b7d3"
	Jan 27 14:38:02 embed-certs-742142 kubelet[3034]: E0127 14:38:02.647119    3034 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988682646515927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:38:02 embed-certs-742142 kubelet[3034]: E0127 14:38:02.647466    3034 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988682646515927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:38:08 embed-certs-742142 kubelet[3034]: E0127 14:38:08.316276    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-kclqf" podUID="9539cc67-38fb-45e9-9884-c251c427b7d3"
	Jan 27 14:38:08 embed-certs-742142 kubelet[3034]: I0127 14:38:08.317285    3034 scope.go:117] "RemoveContainer" containerID="96145da82a4143e5d65c41560620f08bf85190e6d9b45aea563fd1f01a0a99d7"
	Jan 27 14:38:08 embed-certs-742142 kubelet[3034]: E0127 14:38:08.317501    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-frtmc_kubernetes-dashboard(6502e9e7-e803-4a71-a2e8-b4d25b78f0e6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-frtmc" podUID="6502e9e7-e803-4a71-a2e8-b4d25b78f0e6"
	Jan 27 14:38:12 embed-certs-742142 kubelet[3034]: E0127 14:38:12.648939    3034 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988692648647022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:38:12 embed-certs-742142 kubelet[3034]: E0127 14:38:12.648988    3034 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988692648647022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:38:19 embed-certs-742142 kubelet[3034]: I0127 14:38:19.316280    3034 scope.go:117] "RemoveContainer" containerID="96145da82a4143e5d65c41560620f08bf85190e6d9b45aea563fd1f01a0a99d7"
	Jan 27 14:38:19 embed-certs-742142 kubelet[3034]: E0127 14:38:19.316452    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-frtmc_kubernetes-dashboard(6502e9e7-e803-4a71-a2e8-b4d25b78f0e6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-frtmc" podUID="6502e9e7-e803-4a71-a2e8-b4d25b78f0e6"
	Jan 27 14:38:19 embed-certs-742142 kubelet[3034]: E0127 14:38:19.316631    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-kclqf" podUID="9539cc67-38fb-45e9-9884-c251c427b7d3"
	Jan 27 14:38:22 embed-certs-742142 kubelet[3034]: E0127 14:38:22.356028    3034 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 14:38:22 embed-certs-742142 kubelet[3034]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 14:38:22 embed-certs-742142 kubelet[3034]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 14:38:22 embed-certs-742142 kubelet[3034]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 14:38:22 embed-certs-742142 kubelet[3034]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 14:38:22 embed-certs-742142 kubelet[3034]: E0127 14:38:22.650868    3034 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988702650541369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:38:22 embed-certs-742142 kubelet[3034]: E0127 14:38:22.650915    3034 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988702650541369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [479a2cfcd7cdcb7fad6c90d77e3a78973366118e0adc2dc99e9d8e7ed9aba774] <==
	2025/01/27 14:26:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:26:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:27:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:27:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:28:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:28:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:29:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:29:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:30:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:30:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:31:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:31:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:32:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:32:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:33:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:33:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:34:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:34:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:35:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:35:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:36:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:36:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:37:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:37:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:38:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [02e428099be9f4a49f9a0e5a8bcba8cf32bd082326832da61bd203115c39bd79] <==
	I0127 14:16:29.341191       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:16:29.379429       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:16:29.391466       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:16:29.484177       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:16:29.492387       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-742142_3a666fa2-12b5-44e7-baba-069179ce782a!
	I0127 14:16:29.646186       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"523361d7-387a-4000-94e6-640cafd35ac8", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-742142_3a666fa2-12b5-44e7-baba-069179ce782a became leader
	I0127 14:16:29.854982       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-742142_3a666fa2-12b5-44e7-baba-069179ce782a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-742142 -n embed-certs-742142
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-742142 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-kclqf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-742142 describe pod metrics-server-f79f97bbb-kclqf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-742142 describe pod metrics-server-f79f97bbb-kclqf: exit status 1 (60.48925ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-kclqf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-742142 describe pod metrics-server-f79f97bbb-kclqf: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1645.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-456130 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-456130 create -f testdata/busybox.yaml: exit status 1 (56.282998ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-456130" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-456130 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 6 (232.783243ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 14:11:43.504642  604180 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-456130" does not appear in /home/jenkins/minikube-integration/20327-555419/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-456130" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 6 (243.903874ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 14:11:43.747270  604210 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-456130" does not appear in /home/jenkins/minikube-integration/20327-555419/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-456130" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (84.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-456130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-456130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m24.646483957s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-456130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-456130 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-456130 describe deploy/metrics-server -n kube-system: exit status 1 (47.46397ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-456130" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-456130 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 6 (244.772832ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 14:13:08.687551  604688 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-456130" does not appear in /home/jenkins/minikube-integration/20327-555419/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-456130" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (84.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (506.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-456130 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0127 14:13:28.673199  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:15:34.435078  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-456130 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m24.692457664s)

                                                
                                                
-- stdout --
	* [old-k8s-version-456130] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-456130" primary control-plane node in "old-k8s-version-456130" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-456130" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:13:10.290997  604817 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:13:10.291110  604817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:13:10.291119  604817 out.go:358] Setting ErrFile to fd 2...
	I0127 14:13:10.291125  604817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:13:10.291403  604817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:13:10.292036  604817 out.go:352] Setting JSON to false
	I0127 14:13:10.293298  604817 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17735,"bootTime":1737969455,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:13:10.293429  604817 start.go:139] virtualization: kvm guest
	I0127 14:13:10.295304  604817 out.go:177] * [old-k8s-version-456130] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:13:10.296475  604817 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:13:10.296511  604817 notify.go:220] Checking for updates...
	I0127 14:13:10.298697  604817 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:13:10.299768  604817 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:13:10.300881  604817 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:13:10.302038  604817 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:13:10.303137  604817 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:13:10.304725  604817 config.go:182] Loaded profile config "old-k8s-version-456130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:13:10.305259  604817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:13:10.305313  604817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:13:10.321624  604817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35163
	I0127 14:13:10.322129  604817 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:13:10.322708  604817 main.go:141] libmachine: Using API Version  1
	I0127 14:13:10.322737  604817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:13:10.323151  604817 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:13:10.323333  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:13:10.324895  604817 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 14:13:10.326032  604817 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:13:10.326452  604817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:13:10.326496  604817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:13:10.341647  604817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40763
	I0127 14:13:10.342052  604817 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:13:10.342562  604817 main.go:141] libmachine: Using API Version  1
	I0127 14:13:10.342585  604817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:13:10.342946  604817 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:13:10.343145  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:13:10.379509  604817 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:13:10.380719  604817 start.go:297] selected driver: kvm2
	I0127 14:13:10.380739  604817 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:13:10.380872  604817 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:13:10.381574  604817 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:10.381727  604817 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:13:10.397032  604817 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:13:10.397431  604817 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:13:10.397470  604817 cni.go:84] Creating CNI manager for ""
	I0127 14:13:10.397536  604817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:13:10.397617  604817 start.go:340] cluster config:
	{Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:13:10.397747  604817 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:13:10.399223  604817 out.go:177] * Starting "old-k8s-version-456130" primary control-plane node in "old-k8s-version-456130" cluster
	I0127 14:13:10.400339  604817 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 14:13:10.400380  604817 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 14:13:10.400392  604817 cache.go:56] Caching tarball of preloaded images
	I0127 14:13:10.400482  604817 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:13:10.400493  604817 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 14:13:10.400614  604817 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/config.json ...
	I0127 14:13:10.400800  604817 start.go:360] acquireMachinesLock for old-k8s-version-456130: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:13:10.400842  604817 start.go:364] duration metric: took 24.638µs to acquireMachinesLock for "old-k8s-version-456130"
	I0127 14:13:10.400857  604817 start.go:96] Skipping create...Using existing machine configuration
	I0127 14:13:10.400865  604817 fix.go:54] fixHost starting: 
	I0127 14:13:10.401160  604817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:13:10.401198  604817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:13:10.416471  604817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39199
	I0127 14:13:10.416976  604817 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:13:10.417539  604817 main.go:141] libmachine: Using API Version  1
	I0127 14:13:10.417562  604817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:13:10.417908  604817 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:13:10.418106  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:13:10.418252  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetState
	I0127 14:13:10.419817  604817 fix.go:112] recreateIfNeeded on old-k8s-version-456130: state=Stopped err=<nil>
	I0127 14:13:10.419839  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	W0127 14:13:10.420005  604817 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 14:13:10.421445  604817 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-456130" ...
	I0127 14:13:10.422420  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .Start
	I0127 14:13:10.422579  604817 main.go:141] libmachine: (old-k8s-version-456130) starting domain...
	I0127 14:13:10.422605  604817 main.go:141] libmachine: (old-k8s-version-456130) ensuring networks are active...
	I0127 14:13:10.423319  604817 main.go:141] libmachine: (old-k8s-version-456130) Ensuring network default is active
	I0127 14:13:10.423683  604817 main.go:141] libmachine: (old-k8s-version-456130) Ensuring network mk-old-k8s-version-456130 is active
	I0127 14:13:10.424027  604817 main.go:141] libmachine: (old-k8s-version-456130) getting domain XML...
	I0127 14:13:10.424808  604817 main.go:141] libmachine: (old-k8s-version-456130) creating domain...
	I0127 14:13:10.814314  604817 main.go:141] libmachine: (old-k8s-version-456130) waiting for IP...
	I0127 14:13:10.815489  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:10.815975  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:10.816141  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:10.816003  604852 retry.go:31] will retry after 188.022713ms: waiting for domain to come up
	I0127 14:13:11.005598  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:11.006220  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:11.006253  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:11.006178  604852 retry.go:31] will retry after 234.669948ms: waiting for domain to come up
	I0127 14:13:11.242670  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:11.243208  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:11.243242  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:11.243151  604852 retry.go:31] will retry after 475.214011ms: waiting for domain to come up
	I0127 14:13:11.719687  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:11.720295  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:11.720328  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:11.720244  604852 retry.go:31] will retry after 516.970122ms: waiting for domain to come up
	I0127 14:13:12.238565  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:12.239098  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:12.239150  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:12.239089  604852 retry.go:31] will retry after 576.743833ms: waiting for domain to come up
	I0127 14:13:12.818107  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:12.818910  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:12.818942  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:12.818843  604852 retry.go:31] will retry after 590.058875ms: waiting for domain to come up
	I0127 14:13:13.410546  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:13.411040  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:13.411068  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:13.411009  604852 retry.go:31] will retry after 1.081149538s: waiting for domain to come up
	I0127 14:13:14.494062  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:14.494627  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:14.494666  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:14.494600  604852 retry.go:31] will retry after 1.317977339s: waiting for domain to come up
	I0127 14:13:15.814795  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:15.815451  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:15.815476  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:15.815417  604852 retry.go:31] will retry after 1.343482945s: waiting for domain to come up
	I0127 14:13:17.160958  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:17.161466  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:17.161493  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:17.161431  604852 retry.go:31] will retry after 2.273049849s: waiting for domain to come up
	I0127 14:13:19.436669  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:19.437147  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:19.437178  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:19.437114  604852 retry.go:31] will retry after 2.060381528s: waiting for domain to come up
	I0127 14:13:21.498780  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:21.499333  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:21.499370  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:21.499259  604852 retry.go:31] will retry after 2.225426403s: waiting for domain to come up
	I0127 14:13:23.726924  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:23.727618  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | unable to find current IP address of domain old-k8s-version-456130 in network mk-old-k8s-version-456130
	I0127 14:13:23.727645  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | I0127 14:13:23.727505  604852 retry.go:31] will retry after 3.181650066s: waiting for domain to come up
	I0127 14:13:26.910533  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:26.911079  604817 main.go:141] libmachine: (old-k8s-version-456130) found domain IP: 192.168.39.11
	I0127 14:13:26.911110  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has current primary IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:26.911119  604817 main.go:141] libmachine: (old-k8s-version-456130) reserving static IP address...
	I0127 14:13:26.911539  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "old-k8s-version-456130", mac: "52:54:00:7a:98:59", ip: "192.168.39.11"} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:26.911566  604817 main.go:141] libmachine: (old-k8s-version-456130) reserved static IP address 192.168.39.11 for domain old-k8s-version-456130
	I0127 14:13:26.911588  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | skip adding static IP to network mk-old-k8s-version-456130 - found existing host DHCP lease matching {name: "old-k8s-version-456130", mac: "52:54:00:7a:98:59", ip: "192.168.39.11"}
	I0127 14:13:26.911606  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | Getting to WaitForSSH function...
	I0127 14:13:26.911624  604817 main.go:141] libmachine: (old-k8s-version-456130) waiting for SSH...
	I0127 14:13:26.913725  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:26.914101  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:26.914137  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:26.914289  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | Using SSH client type: external
	I0127 14:13:26.914330  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa (-rw-------)
	I0127 14:13:26.914368  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:13:26.914378  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | About to run SSH command:
	I0127 14:13:26.914387  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | exit 0
	I0127 14:13:27.033436  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | SSH cmd err, output: <nil>: 
	I0127 14:13:27.033887  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetConfigRaw
	I0127 14:13:27.034625  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:13:27.037328  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.037750  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:27.037800  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.037992  604817 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/config.json ...
	I0127 14:13:27.038210  604817 machine.go:93] provisionDockerMachine start ...
	I0127 14:13:27.038233  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:13:27.038472  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:13:27.040534  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.040825  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:27.040874  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.040962  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:13:27.041114  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:27.041285  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:27.041410  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:13:27.041552  604817 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:27.041756  604817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:13:27.041769  604817 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 14:13:27.137529  604817 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 14:13:27.137561  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetMachineName
	I0127 14:13:27.137802  604817 buildroot.go:166] provisioning hostname "old-k8s-version-456130"
	I0127 14:13:27.137824  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetMachineName
	I0127 14:13:27.137986  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:13:27.140484  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.140832  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:27.140871  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.140980  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:13:27.141180  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:27.141354  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:27.141487  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:13:27.141650  604817 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:27.141852  604817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:13:27.141873  604817 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-456130 && echo "old-k8s-version-456130" | sudo tee /etc/hostname
	I0127 14:13:27.253434  604817 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-456130
	
	I0127 14:13:27.253462  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:13:27.256174  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.256549  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:27.256580  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.256707  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:13:27.256904  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:27.257041  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:27.257193  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:13:27.257375  604817 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:27.257597  604817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:13:27.257624  604817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-456130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-456130/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-456130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:13:27.372546  604817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:13:27.372580  604817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:13:27.372638  604817 buildroot.go:174] setting up certificates
	I0127 14:13:27.372649  604817 provision.go:84] configureAuth start
	I0127 14:13:27.372663  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetMachineName
	I0127 14:13:27.372952  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:13:27.375930  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.376317  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:27.376341  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.376563  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:13:27.379305  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.379617  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:27.379640  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.379797  604817 provision.go:143] copyHostCerts
	I0127 14:13:27.379851  604817 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:13:27.379860  604817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:13:27.379929  604817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:13:27.380030  604817 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:13:27.380039  604817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:13:27.380072  604817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:13:27.380165  604817 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:13:27.380184  604817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:13:27.380216  604817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:13:27.380272  604817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-456130 san=[127.0.0.1 192.168.39.11 localhost minikube old-k8s-version-456130]
	I0127 14:13:27.642035  604817 provision.go:177] copyRemoteCerts
	I0127 14:13:27.642110  604817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:13:27.642141  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:13:27.644815  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.645207  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:27.645244  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.645484  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:13:27.645713  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:27.645904  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:13:27.646053  604817 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:13:27.731403  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 14:13:27.759547  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:13:27.786214  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:13:27.812832  604817 provision.go:87] duration metric: took 440.173307ms to configureAuth
	I0127 14:13:27.812853  604817 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:13:27.813053  604817 config.go:182] Loaded profile config "old-k8s-version-456130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:13:27.813153  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:13:27.816293  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.816692  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:27.816718  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:27.816929  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:13:27.817139  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:27.817294  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:27.817466  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:13:27.817659  604817 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:27.817862  604817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:13:27.817888  604817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:13:28.052531  604817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:13:28.052564  604817 machine.go:96] duration metric: took 1.014338329s to provisionDockerMachine
	I0127 14:13:28.052579  604817 start.go:293] postStartSetup for "old-k8s-version-456130" (driver="kvm2")
	I0127 14:13:28.052591  604817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:13:28.052633  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:13:28.053039  604817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:13:28.053077  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:13:28.055779  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:28.056166  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:28.056197  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:28.056384  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:13:28.056592  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:28.056852  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:13:28.057001  604817 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:13:28.142805  604817 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:13:28.147678  604817 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:13:28.147704  604817 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:13:28.147786  604817 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:13:28.147907  604817 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:13:28.148049  604817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:13:28.157433  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:13:28.188339  604817 start.go:296] duration metric: took 135.745528ms for postStartSetup
	I0127 14:13:28.188392  604817 fix.go:56] duration metric: took 17.787526073s for fixHost
	I0127 14:13:28.188416  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:13:28.191509  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:28.191886  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:28.191921  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:28.192165  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:13:28.192347  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:28.192521  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:28.192669  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:13:28.192829  604817 main.go:141] libmachine: Using SSH client type: native
	I0127 14:13:28.193039  604817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0127 14:13:28.193053  604817 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:13:28.294328  604817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987208.257671877
	
	I0127 14:13:28.294356  604817 fix.go:216] guest clock: 1737987208.257671877
	I0127 14:13:28.294366  604817 fix.go:229] Guest: 2025-01-27 14:13:28.257671877 +0000 UTC Remote: 2025-01-27 14:13:28.188397012 +0000 UTC m=+17.943828457 (delta=69.274865ms)
	I0127 14:13:28.294415  604817 fix.go:200] guest clock delta is within tolerance: 69.274865ms
	I0127 14:13:28.294425  604817 start.go:83] releasing machines lock for "old-k8s-version-456130", held for 17.893572315s
	I0127 14:13:28.294451  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:13:28.294688  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:13:28.297329  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:28.297713  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:28.297741  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:28.297949  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:13:28.298483  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:13:28.298675  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .DriverName
	I0127 14:13:28.298770  604817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:13:28.298817  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:13:28.298944  604817 ssh_runner.go:195] Run: cat /version.json
	I0127 14:13:28.298991  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHHostname
	I0127 14:13:28.301874  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:28.302184  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:28.302273  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:28.302304  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:28.302431  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:13:28.302614  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:28.302640  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:28.302701  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:28.302837  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHPort
	I0127 14:13:28.302940  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:13:28.302999  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHKeyPath
	I0127 14:13:28.303060  604817 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:13:28.303184  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetSSHUsername
	I0127 14:13:28.303362  604817 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/old-k8s-version-456130/id_rsa Username:docker}
	I0127 14:13:28.406450  604817 ssh_runner.go:195] Run: systemctl --version
	I0127 14:13:28.412211  604817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:13:28.563471  604817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:13:28.570231  604817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:13:28.570312  604817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:13:28.588825  604817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:13:28.588853  604817 start.go:495] detecting cgroup driver to use...
	I0127 14:13:28.588942  604817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:13:28.606835  604817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:13:28.620043  604817 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:13:28.620098  604817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:13:28.638171  604817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:13:28.652734  604817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:13:28.765984  604817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:13:28.931927  604817 docker.go:233] disabling docker service ...
	I0127 14:13:28.931995  604817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:13:28.946264  604817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:13:28.959016  604817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:13:29.074583  604817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:13:29.196253  604817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:13:29.210284  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:13:29.229643  604817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 14:13:29.229716  604817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:13:29.240671  604817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:13:29.240745  604817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:13:29.252068  604817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:13:29.262785  604817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:13:29.273120  604817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:13:29.284067  604817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:13:29.293797  604817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:13:29.293838  604817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:13:29.306961  604817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:13:29.318062  604817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:13:29.452230  604817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:13:29.544011  604817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:13:29.544102  604817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:13:29.548914  604817 start.go:563] Will wait 60s for crictl version
	I0127 14:13:29.548968  604817 ssh_runner.go:195] Run: which crictl
	I0127 14:13:29.552914  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:13:29.595486  604817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:13:29.595569  604817 ssh_runner.go:195] Run: crio --version
	I0127 14:13:29.624406  604817 ssh_runner.go:195] Run: crio --version
	I0127 14:13:29.655589  604817 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 14:13:29.656744  604817 main.go:141] libmachine: (old-k8s-version-456130) Calling .GetIP
	I0127 14:13:29.659482  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:29.659863  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:59", ip: ""} in network mk-old-k8s-version-456130: {Iface:virbr2 ExpiryTime:2025-01-27 15:13:21 +0000 UTC Type:0 Mac:52:54:00:7a:98:59 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:old-k8s-version-456130 Clientid:01:52:54:00:7a:98:59}
	I0127 14:13:29.659893  604817 main.go:141] libmachine: (old-k8s-version-456130) DBG | domain old-k8s-version-456130 has defined IP address 192.168.39.11 and MAC address 52:54:00:7a:98:59 in network mk-old-k8s-version-456130
	I0127 14:13:29.660183  604817 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 14:13:29.664310  604817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:13:29.677035  604817 kubeadm.go:883] updating cluster {Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:13:29.677203  604817 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 14:13:29.677260  604817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:13:29.731063  604817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 14:13:29.731135  604817 ssh_runner.go:195] Run: which lz4
	I0127 14:13:29.735551  604817 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:13:29.739864  604817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:13:29.739893  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 14:13:31.627946  604817 crio.go:462] duration metric: took 1.892431972s to copy over tarball
	I0127 14:13:31.628057  604817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:13:34.703272  604817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.07517884s)
	I0127 14:13:34.703310  604817 crio.go:469] duration metric: took 3.075313847s to extract the tarball
	I0127 14:13:34.703321  604817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 14:13:34.747626  604817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:13:34.784416  604817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 14:13:34.784445  604817 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 14:13:34.784530  604817 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:13:34.784533  604817 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:13:34.784576  604817 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:13:34.784616  604817 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 14:13:34.784631  604817 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 14:13:34.784624  604817 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:13:34.784605  604817 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:13:34.784600  604817 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:13:34.786388  604817 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:13:34.786413  604817 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:13:34.786441  604817 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:13:34.786395  604817 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:13:34.786464  604817 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:13:34.786468  604817 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 14:13:34.786486  604817 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 14:13:34.786522  604817 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:13:34.938713  604817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 14:13:34.941192  604817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:13:34.947890  604817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 14:13:34.966377  604817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:13:34.972739  604817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:13:34.977240  604817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:13:34.981649  604817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 14:13:35.044023  604817 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 14:13:35.044099  604817 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 14:13:35.044158  604817 ssh_runner.go:195] Run: which crictl
	I0127 14:13:35.086808  604817 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 14:13:35.086869  604817 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:13:35.086922  604817 ssh_runner.go:195] Run: which crictl
	I0127 14:13:35.092495  604817 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 14:13:35.092540  604817 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 14:13:35.092587  604817 ssh_runner.go:195] Run: which crictl
	I0127 14:13:35.121090  604817 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 14:13:35.121158  604817 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 14:13:35.121201  604817 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:13:35.121255  604817 ssh_runner.go:195] Run: which crictl
	I0127 14:13:35.121163  604817 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:13:35.121331  604817 ssh_runner.go:195] Run: which crictl
	I0127 14:13:35.126863  604817 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 14:13:35.126898  604817 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 14:13:35.126907  604817 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:13:35.126929  604817 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 14:13:35.126956  604817 ssh_runner.go:195] Run: which crictl
	I0127 14:13:35.126966  604817 ssh_runner.go:195] Run: which crictl
	I0127 14:13:35.126968  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:13:35.127036  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:13:35.127059  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:13:35.127036  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:13:35.129842  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:13:35.141927  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:13:35.270587  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:13:35.270676  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:13:35.270748  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:13:35.277611  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:13:35.277697  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:13:35.277866  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:13:35.288023  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:13:35.434617  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 14:13:35.453827  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 14:13:35.453897  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 14:13:35.453939  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 14:13:35.453958  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 14:13:35.453992  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 14:13:35.453994  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:13:35.523104  604817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 14:13:35.572078  604817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 14:13:35.621410  604817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 14:13:35.621514  604817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 14:13:35.621603  604817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 14:13:35.621647  604817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 14:13:35.621692  604817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 14:13:35.662342  604817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 14:13:35.691029  604817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:13:35.838946  604817 cache_images.go:92] duration metric: took 1.054484177s to LoadCachedImages
	W0127 14:13:35.839037  604817 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20327-555419/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0127 14:13:35.839055  604817 kubeadm.go:934] updating node { 192.168.39.11 8443 v1.20.0 crio true true} ...
	I0127 14:13:35.839186  604817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-456130 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:13:35.839332  604817 ssh_runner.go:195] Run: crio config
	I0127 14:13:35.891840  604817 cni.go:84] Creating CNI manager for ""
	I0127 14:13:35.891864  604817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:13:35.891874  604817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:13:35.891895  604817 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-456130 NodeName:old-k8s-version-456130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 14:13:35.892015  604817 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-456130"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:13:35.892074  604817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 14:13:35.903164  604817 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:13:35.903242  604817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:13:35.913232  604817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 14:13:35.931248  604817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:13:35.949594  604817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0127 14:13:35.975028  604817 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I0127 14:13:35.979098  604817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:13:35.992511  604817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:13:36.134602  604817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:13:36.152065  604817 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130 for IP: 192.168.39.11
	I0127 14:13:36.152090  604817 certs.go:194] generating shared ca certs ...
	I0127 14:13:36.152112  604817 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:13:36.152325  604817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:13:36.152391  604817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:13:36.152414  604817 certs.go:256] generating profile certs ...
	I0127 14:13:36.152561  604817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/client.key
	I0127 14:13:36.152630  604817 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key.294f913a
	I0127 14:13:36.152681  604817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key
	I0127 14:13:36.152831  604817 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:13:36.152869  604817 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:13:36.152883  604817 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:13:36.152917  604817 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:13:36.152948  604817 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:13:36.152980  604817 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:13:36.153034  604817 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:13:36.153898  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:13:36.185130  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:13:36.214470  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:13:36.245484  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:13:36.289420  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 14:13:36.320721  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:13:36.359172  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:13:36.396808  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/old-k8s-version-456130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 14:13:36.434023  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:13:36.476916  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:13:36.520302  604817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:13:36.557959  604817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:13:36.582412  604817 ssh_runner.go:195] Run: openssl version
	I0127 14:13:36.588875  604817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:13:36.600643  604817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:13:36.605535  604817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:13:36.605646  604817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:13:36.611545  604817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:13:36.622319  604817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:13:36.632417  604817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:13:36.636745  604817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:13:36.636814  604817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:13:36.642963  604817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:13:36.656323  604817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:13:36.667977  604817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:13:36.672494  604817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:13:36.672535  604817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:13:36.678465  604817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:13:36.688837  604817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:13:36.693286  604817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 14:13:36.699465  604817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 14:13:36.705415  604817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 14:13:36.711345  604817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 14:13:36.717182  604817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 14:13:36.722756  604817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 14:13:36.728356  604817 kubeadm.go:392] StartCluster: {Name:old-k8s-version-456130 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-456130 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:13:36.728447  604817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:13:36.728512  604817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:13:36.765593  604817 cri.go:89] found id: ""
	I0127 14:13:36.765656  604817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:13:36.775279  604817 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 14:13:36.775299  604817 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 14:13:36.775343  604817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 14:13:36.785819  604817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:13:36.786671  604817 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-456130" does not appear in /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:13:36.787226  604817 kubeconfig.go:62] /home/jenkins/minikube-integration/20327-555419/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-456130" cluster setting kubeconfig missing "old-k8s-version-456130" context setting]
	I0127 14:13:36.788113  604817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:13:36.789963  604817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 14:13:36.799290  604817 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.11
	I0127 14:13:36.799344  604817 kubeadm.go:1160] stopping kube-system containers ...
	I0127 14:13:36.799358  604817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 14:13:36.799399  604817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:13:36.836332  604817 cri.go:89] found id: ""
	I0127 14:13:36.836398  604817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 14:13:36.855074  604817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:13:36.865024  604817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:13:36.865061  604817 kubeadm.go:157] found existing configuration files:
	
	I0127 14:13:36.865116  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:13:36.874312  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:13:36.874368  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:13:36.883765  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:13:36.893113  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:13:36.893171  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:13:36.902745  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:13:36.911884  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:13:36.911924  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:13:36.921538  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:13:36.930594  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:13:36.930642  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:13:36.940731  604817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:13:36.951111  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:13:37.182333  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:13:38.169599  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:13:38.406640  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:13:38.515751  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 14:13:38.605929  604817 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:13:38.606023  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:39.106361  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:39.607064  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:40.106145  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:40.607082  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:41.106624  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:41.606294  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:42.106358  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:42.606317  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:43.106971  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:43.606842  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:44.106990  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:44.606787  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:45.106325  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:45.606831  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:46.106259  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:46.606763  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:47.106162  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:47.606787  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:48.106293  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:48.606275  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:49.106965  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:49.606822  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:50.106852  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:50.606673  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:51.106794  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:51.606182  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:52.106129  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:52.606789  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:53.106781  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:53.606350  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:54.106750  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:54.606815  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:55.106777  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:55.606619  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:56.106788  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:56.606824  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:57.106598  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:57.606774  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:58.106073  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:58.606783  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:59.106551  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:13:59.606592  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:00.106788  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:00.606205  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:01.106583  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:01.606509  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:02.106134  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:02.606385  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:03.106800  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:03.606814  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:04.106811  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:04.606876  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:05.106244  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:05.606404  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:06.106934  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:06.606184  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:07.106994  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:07.606495  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:08.106222  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:08.606537  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:09.106771  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:09.606592  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:10.106996  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:10.606529  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:11.106260  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:11.606561  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:12.106243  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:12.606790  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:13.106782  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:13.606158  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:14.106789  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:14.606575  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:15.106359  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:15.606058  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:16.106819  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:16.606801  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:17.107051  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:17.606378  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:18.106804  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:18.606783  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:19.106253  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:19.606795  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:20.106682  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:20.606760  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:21.106203  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:21.606898  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:22.106365  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:22.606909  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:23.106488  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:23.606462  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:24.106395  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:24.607067  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:25.106515  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:25.606788  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:26.106952  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:26.606775  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:27.106432  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:27.606278  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:28.106637  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:28.606527  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:29.106795  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:29.606681  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:30.106804  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:30.606904  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:31.106815  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:31.606797  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:32.106554  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:32.606398  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:33.106787  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:33.606819  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:34.106454  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:34.606786  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:35.106792  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:35.606251  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:36.106772  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:36.606618  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:37.106557  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:37.606254  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:38.106284  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:38.606350  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:14:38.606435  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:14:38.644454  604817 cri.go:89] found id: ""
	I0127 14:14:38.644481  604817 logs.go:282] 0 containers: []
	W0127 14:14:38.644489  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:14:38.644495  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:14:38.644566  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:14:38.683243  604817 cri.go:89] found id: ""
	I0127 14:14:38.683274  604817 logs.go:282] 0 containers: []
	W0127 14:14:38.683282  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:14:38.683287  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:14:38.683338  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:14:38.716898  604817 cri.go:89] found id: ""
	I0127 14:14:38.716937  604817 logs.go:282] 0 containers: []
	W0127 14:14:38.716950  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:14:38.716957  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:14:38.717020  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:14:38.751646  604817 cri.go:89] found id: ""
	I0127 14:14:38.751677  604817 logs.go:282] 0 containers: []
	W0127 14:14:38.751688  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:14:38.751710  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:14:38.751778  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:14:38.784343  604817 cri.go:89] found id: ""
	I0127 14:14:38.784375  604817 logs.go:282] 0 containers: []
	W0127 14:14:38.784385  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:14:38.784392  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:14:38.784452  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:14:38.818305  604817 cri.go:89] found id: ""
	I0127 14:14:38.818338  604817 logs.go:282] 0 containers: []
	W0127 14:14:38.818349  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:14:38.818372  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:14:38.818430  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:14:38.854499  604817 cri.go:89] found id: ""
	I0127 14:14:38.854521  604817 logs.go:282] 0 containers: []
	W0127 14:14:38.854529  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:14:38.854534  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:14:38.854581  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:14:38.888391  604817 cri.go:89] found id: ""
	I0127 14:14:38.888417  604817 logs.go:282] 0 containers: []
	W0127 14:14:38.888424  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:14:38.888433  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:14:38.888457  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:14:38.965383  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:14:38.965434  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:14:39.005541  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:14:39.005657  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:14:39.060498  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:14:39.060546  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:14:39.074144  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:14:39.074176  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:14:39.200957  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:14:41.701638  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:41.715904  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:14:41.715989  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:14:41.751823  604817 cri.go:89] found id: ""
	I0127 14:14:41.751859  604817 logs.go:282] 0 containers: []
	W0127 14:14:41.751871  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:14:41.751880  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:14:41.751944  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:14:41.789007  604817 cri.go:89] found id: ""
	I0127 14:14:41.789037  604817 logs.go:282] 0 containers: []
	W0127 14:14:41.789047  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:14:41.789055  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:14:41.789132  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:14:41.823295  604817 cri.go:89] found id: ""
	I0127 14:14:41.823324  604817 logs.go:282] 0 containers: []
	W0127 14:14:41.823340  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:14:41.823350  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:14:41.823412  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:14:41.856854  604817 cri.go:89] found id: ""
	I0127 14:14:41.856878  604817 logs.go:282] 0 containers: []
	W0127 14:14:41.856886  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:14:41.856892  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:14:41.856937  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:14:41.891118  604817 cri.go:89] found id: ""
	I0127 14:14:41.891146  604817 logs.go:282] 0 containers: []
	W0127 14:14:41.891154  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:14:41.891161  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:14:41.891215  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:14:41.937363  604817 cri.go:89] found id: ""
	I0127 14:14:41.937393  604817 logs.go:282] 0 containers: []
	W0127 14:14:41.937405  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:14:41.937414  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:14:41.937478  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:14:41.971979  604817 cri.go:89] found id: ""
	I0127 14:14:41.972011  604817 logs.go:282] 0 containers: []
	W0127 14:14:41.972023  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:14:41.972032  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:14:41.972108  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:14:42.008013  604817 cri.go:89] found id: ""
	I0127 14:14:42.008048  604817 logs.go:282] 0 containers: []
	W0127 14:14:42.008061  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:14:42.008077  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:14:42.008100  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:14:42.043903  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:14:42.043936  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:14:42.101447  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:14:42.101476  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:14:42.114894  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:14:42.114928  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:14:42.195634  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:14:42.195661  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:14:42.195676  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:14:44.766773  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:44.780792  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:14:44.780859  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:14:44.822819  604817 cri.go:89] found id: ""
	I0127 14:14:44.822849  604817 logs.go:282] 0 containers: []
	W0127 14:14:44.822859  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:14:44.822870  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:14:44.822932  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:14:44.861735  604817 cri.go:89] found id: ""
	I0127 14:14:44.861762  604817 logs.go:282] 0 containers: []
	W0127 14:14:44.861773  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:14:44.861781  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:14:44.861844  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:14:44.895789  604817 cri.go:89] found id: ""
	I0127 14:14:44.895812  604817 logs.go:282] 0 containers: []
	W0127 14:14:44.895819  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:14:44.895828  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:14:44.895881  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:14:44.929507  604817 cri.go:89] found id: ""
	I0127 14:14:44.929535  604817 logs.go:282] 0 containers: []
	W0127 14:14:44.929547  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:14:44.929555  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:14:44.929635  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:14:44.961357  604817 cri.go:89] found id: ""
	I0127 14:14:44.961375  604817 logs.go:282] 0 containers: []
	W0127 14:14:44.961382  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:14:44.961387  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:14:44.961440  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:14:44.994697  604817 cri.go:89] found id: ""
	I0127 14:14:44.994725  604817 logs.go:282] 0 containers: []
	W0127 14:14:44.994733  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:14:44.994739  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:14:44.994791  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:14:45.028946  604817 cri.go:89] found id: ""
	I0127 14:14:45.028975  604817 logs.go:282] 0 containers: []
	W0127 14:14:45.028983  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:14:45.028988  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:14:45.029045  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:14:45.062210  604817 cri.go:89] found id: ""
	I0127 14:14:45.062243  604817 logs.go:282] 0 containers: []
	W0127 14:14:45.062251  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:14:45.062261  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:14:45.062274  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:14:45.113681  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:14:45.113704  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:14:45.126881  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:14:45.126909  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:14:45.194571  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:14:45.194588  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:14:45.194601  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:14:45.271498  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:14:45.271526  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:14:47.811673  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:47.828679  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:14:47.828763  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:14:47.866667  604817 cri.go:89] found id: ""
	I0127 14:14:47.866703  604817 logs.go:282] 0 containers: []
	W0127 14:14:47.866715  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:14:47.866724  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:14:47.866786  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:14:47.907432  604817 cri.go:89] found id: ""
	I0127 14:14:47.907457  604817 logs.go:282] 0 containers: []
	W0127 14:14:47.907468  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:14:47.907476  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:14:47.907537  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:14:47.943416  604817 cri.go:89] found id: ""
	I0127 14:14:47.943446  604817 logs.go:282] 0 containers: []
	W0127 14:14:47.943457  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:14:47.943466  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:14:47.943526  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:14:47.976260  604817 cri.go:89] found id: ""
	I0127 14:14:47.976285  604817 logs.go:282] 0 containers: []
	W0127 14:14:47.976296  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:14:47.976304  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:14:47.976366  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:14:48.016146  604817 cri.go:89] found id: ""
	I0127 14:14:48.016176  604817 logs.go:282] 0 containers: []
	W0127 14:14:48.016187  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:14:48.016195  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:14:48.016259  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:14:48.054696  604817 cri.go:89] found id: ""
	I0127 14:14:48.054722  604817 logs.go:282] 0 containers: []
	W0127 14:14:48.054731  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:14:48.054737  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:14:48.054801  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:14:48.091659  604817 cri.go:89] found id: ""
	I0127 14:14:48.091684  604817 logs.go:282] 0 containers: []
	W0127 14:14:48.091692  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:14:48.091698  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:14:48.091765  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:14:48.127350  604817 cri.go:89] found id: ""
	I0127 14:14:48.127375  604817 logs.go:282] 0 containers: []
	W0127 14:14:48.127383  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:14:48.127393  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:14:48.127404  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:14:48.227141  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:14:48.227169  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:14:48.265238  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:14:48.265267  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:14:48.332136  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:14:48.332167  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:14:48.357860  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:14:48.357891  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:14:48.448495  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:14:50.949440  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:50.965188  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:14:50.965275  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:14:50.999505  604817 cri.go:89] found id: ""
	I0127 14:14:50.999535  604817 logs.go:282] 0 containers: []
	W0127 14:14:50.999543  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:14:50.999552  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:14:50.999606  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:14:51.035074  604817 cri.go:89] found id: ""
	I0127 14:14:51.035105  604817 logs.go:282] 0 containers: []
	W0127 14:14:51.035120  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:14:51.035128  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:14:51.035190  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:14:51.069315  604817 cri.go:89] found id: ""
	I0127 14:14:51.069340  604817 logs.go:282] 0 containers: []
	W0127 14:14:51.069349  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:14:51.069356  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:14:51.069409  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:14:51.102071  604817 cri.go:89] found id: ""
	I0127 14:14:51.102100  604817 logs.go:282] 0 containers: []
	W0127 14:14:51.102113  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:14:51.102124  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:14:51.102182  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:14:51.135207  604817 cri.go:89] found id: ""
	I0127 14:14:51.135230  604817 logs.go:282] 0 containers: []
	W0127 14:14:51.135236  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:14:51.135242  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:14:51.135283  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:14:51.170430  604817 cri.go:89] found id: ""
	I0127 14:14:51.170456  604817 logs.go:282] 0 containers: []
	W0127 14:14:51.170465  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:14:51.170473  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:14:51.170526  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:14:51.205187  604817 cri.go:89] found id: ""
	I0127 14:14:51.205225  604817 logs.go:282] 0 containers: []
	W0127 14:14:51.205237  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:14:51.205247  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:14:51.205323  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:14:51.245657  604817 cri.go:89] found id: ""
	I0127 14:14:51.245685  604817 logs.go:282] 0 containers: []
	W0127 14:14:51.245695  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:14:51.245705  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:14:51.245715  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:14:51.318957  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:14:51.318987  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:14:51.335155  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:14:51.335179  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:14:51.425837  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:14:51.425859  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:14:51.425874  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:14:51.503037  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:14:51.503069  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:14:54.044799  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:54.059156  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:14:54.059232  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:14:54.108622  604817 cri.go:89] found id: ""
	I0127 14:14:54.108654  604817 logs.go:282] 0 containers: []
	W0127 14:14:54.108666  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:14:54.108675  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:14:54.108742  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:14:54.158791  604817 cri.go:89] found id: ""
	I0127 14:14:54.158832  604817 logs.go:282] 0 containers: []
	W0127 14:14:54.158843  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:14:54.158851  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:14:54.158923  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:14:54.193337  604817 cri.go:89] found id: ""
	I0127 14:14:54.193364  604817 logs.go:282] 0 containers: []
	W0127 14:14:54.193371  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:14:54.193377  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:14:54.193437  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:14:54.223890  604817 cri.go:89] found id: ""
	I0127 14:14:54.223912  604817 logs.go:282] 0 containers: []
	W0127 14:14:54.223919  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:14:54.223924  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:14:54.223977  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:14:54.257701  604817 cri.go:89] found id: ""
	I0127 14:14:54.257723  604817 logs.go:282] 0 containers: []
	W0127 14:14:54.257729  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:14:54.257735  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:14:54.257791  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:14:54.290987  604817 cri.go:89] found id: ""
	I0127 14:14:54.291008  604817 logs.go:282] 0 containers: []
	W0127 14:14:54.291018  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:14:54.291026  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:14:54.291074  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:14:54.324676  604817 cri.go:89] found id: ""
	I0127 14:14:54.324699  604817 logs.go:282] 0 containers: []
	W0127 14:14:54.324705  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:14:54.324710  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:14:54.324763  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:14:54.362001  604817 cri.go:89] found id: ""
	I0127 14:14:54.362027  604817 logs.go:282] 0 containers: []
	W0127 14:14:54.362034  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:14:54.362043  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:14:54.362053  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:14:54.440256  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:14:54.440284  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:14:54.482346  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:14:54.482376  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:14:54.532857  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:14:54.532887  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:14:54.546349  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:14:54.546374  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:14:54.617449  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:14:57.118083  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:14:57.131390  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:14:57.131461  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:14:57.164611  604817 cri.go:89] found id: ""
	I0127 14:14:57.164636  604817 logs.go:282] 0 containers: []
	W0127 14:14:57.164646  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:14:57.164654  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:14:57.164719  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:14:57.200163  604817 cri.go:89] found id: ""
	I0127 14:14:57.200187  604817 logs.go:282] 0 containers: []
	W0127 14:14:57.200194  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:14:57.200200  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:14:57.200254  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:14:57.231588  604817 cri.go:89] found id: ""
	I0127 14:14:57.231613  604817 logs.go:282] 0 containers: []
	W0127 14:14:57.231622  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:14:57.231630  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:14:57.231687  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:14:57.264972  604817 cri.go:89] found id: ""
	I0127 14:14:57.265000  604817 logs.go:282] 0 containers: []
	W0127 14:14:57.265008  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:14:57.265013  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:14:57.265066  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:14:57.297953  604817 cri.go:89] found id: ""
	I0127 14:14:57.297975  604817 logs.go:282] 0 containers: []
	W0127 14:14:57.297982  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:14:57.297987  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:14:57.298089  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:14:57.332646  604817 cri.go:89] found id: ""
	I0127 14:14:57.332665  604817 logs.go:282] 0 containers: []
	W0127 14:14:57.332671  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:14:57.332677  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:14:57.332752  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:14:57.364903  604817 cri.go:89] found id: ""
	I0127 14:14:57.364925  604817 logs.go:282] 0 containers: []
	W0127 14:14:57.364932  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:14:57.364937  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:14:57.364985  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:14:57.396975  604817 cri.go:89] found id: ""
	I0127 14:14:57.397000  604817 logs.go:282] 0 containers: []
	W0127 14:14:57.397009  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:14:57.397021  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:14:57.397034  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:14:57.449304  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:14:57.449341  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:14:57.463693  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:14:57.463719  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:14:57.531290  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:14:57.531312  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:14:57.531325  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:14:57.610509  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:14:57.610542  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:00.152765  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:00.165303  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:00.165361  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:00.202916  604817 cri.go:89] found id: ""
	I0127 14:15:00.202947  604817 logs.go:282] 0 containers: []
	W0127 14:15:00.202959  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:00.202970  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:00.203034  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:00.233842  604817 cri.go:89] found id: ""
	I0127 14:15:00.233874  604817 logs.go:282] 0 containers: []
	W0127 14:15:00.233884  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:00.233892  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:00.233953  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:00.266537  604817 cri.go:89] found id: ""
	I0127 14:15:00.266559  604817 logs.go:282] 0 containers: []
	W0127 14:15:00.266566  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:00.266572  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:00.266619  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:00.298196  604817 cri.go:89] found id: ""
	I0127 14:15:00.298221  604817 logs.go:282] 0 containers: []
	W0127 14:15:00.298230  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:00.298238  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:00.298296  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:00.334423  604817 cri.go:89] found id: ""
	I0127 14:15:00.334446  604817 logs.go:282] 0 containers: []
	W0127 14:15:00.334453  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:00.334459  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:00.334502  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:00.367488  604817 cri.go:89] found id: ""
	I0127 14:15:00.367516  604817 logs.go:282] 0 containers: []
	W0127 14:15:00.367527  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:00.367534  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:00.367590  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:00.398875  604817 cri.go:89] found id: ""
	I0127 14:15:00.398908  604817 logs.go:282] 0 containers: []
	W0127 14:15:00.398920  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:00.398939  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:00.399007  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:00.432755  604817 cri.go:89] found id: ""
	I0127 14:15:00.432784  604817 logs.go:282] 0 containers: []
	W0127 14:15:00.432794  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:00.432808  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:00.432822  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:00.482830  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:00.482855  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:00.495833  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:00.495853  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:00.565516  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:00.565540  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:00.565574  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:00.640505  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:00.640533  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:03.179940  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:03.194755  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:03.194830  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:03.228330  604817 cri.go:89] found id: ""
	I0127 14:15:03.228359  604817 logs.go:282] 0 containers: []
	W0127 14:15:03.228365  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:03.228371  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:03.228427  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:03.271086  604817 cri.go:89] found id: ""
	I0127 14:15:03.271112  604817 logs.go:282] 0 containers: []
	W0127 14:15:03.271121  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:03.271128  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:03.271201  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:03.303690  604817 cri.go:89] found id: ""
	I0127 14:15:03.303714  604817 logs.go:282] 0 containers: []
	W0127 14:15:03.303723  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:03.303730  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:03.303783  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:03.339451  604817 cri.go:89] found id: ""
	I0127 14:15:03.339477  604817 logs.go:282] 0 containers: []
	W0127 14:15:03.339486  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:03.339494  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:03.339545  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:03.372474  604817 cri.go:89] found id: ""
	I0127 14:15:03.372501  604817 logs.go:282] 0 containers: []
	W0127 14:15:03.372510  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:03.372516  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:03.372575  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:03.411373  604817 cri.go:89] found id: ""
	I0127 14:15:03.411401  604817 logs.go:282] 0 containers: []
	W0127 14:15:03.411410  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:03.411417  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:03.411468  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:03.448349  604817 cri.go:89] found id: ""
	I0127 14:15:03.448379  604817 logs.go:282] 0 containers: []
	W0127 14:15:03.448391  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:03.448400  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:03.448472  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:03.487461  604817 cri.go:89] found id: ""
	I0127 14:15:03.487487  604817 logs.go:282] 0 containers: []
	W0127 14:15:03.487498  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:03.487510  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:03.487524  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:03.538736  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:03.538763  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:03.553230  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:03.553260  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:03.626344  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:03.626377  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:03.626395  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:03.714853  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:03.714886  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:06.258305  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:06.271421  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:06.271515  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:06.306558  604817 cri.go:89] found id: ""
	I0127 14:15:06.306582  604817 logs.go:282] 0 containers: []
	W0127 14:15:06.306592  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:06.306602  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:06.306655  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:06.342838  604817 cri.go:89] found id: ""
	I0127 14:15:06.342867  604817 logs.go:282] 0 containers: []
	W0127 14:15:06.342876  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:06.342891  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:06.342946  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:06.385221  604817 cri.go:89] found id: ""
	I0127 14:15:06.385247  604817 logs.go:282] 0 containers: []
	W0127 14:15:06.385257  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:06.385266  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:06.385332  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:06.429365  604817 cri.go:89] found id: ""
	I0127 14:15:06.429389  604817 logs.go:282] 0 containers: []
	W0127 14:15:06.429398  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:06.429406  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:06.429464  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:06.466265  604817 cri.go:89] found id: ""
	I0127 14:15:06.466298  604817 logs.go:282] 0 containers: []
	W0127 14:15:06.466310  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:06.466318  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:06.466392  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:06.502700  604817 cri.go:89] found id: ""
	I0127 14:15:06.502734  604817 logs.go:282] 0 containers: []
	W0127 14:15:06.502745  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:06.502754  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:06.502825  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:06.536798  604817 cri.go:89] found id: ""
	I0127 14:15:06.536830  604817 logs.go:282] 0 containers: []
	W0127 14:15:06.536841  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:06.536848  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:06.536907  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:06.579254  604817 cri.go:89] found id: ""
	I0127 14:15:06.579282  604817 logs.go:282] 0 containers: []
	W0127 14:15:06.579289  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:06.579299  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:06.579316  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:06.595106  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:06.595139  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:06.672015  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:06.672042  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:06.672057  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:06.767052  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:06.767094  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:06.812031  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:06.812067  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:09.367573  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:09.380289  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:09.380357  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:09.414985  604817 cri.go:89] found id: ""
	I0127 14:15:09.415017  604817 logs.go:282] 0 containers: []
	W0127 14:15:09.415025  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:09.415031  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:09.415078  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:09.448336  604817 cri.go:89] found id: ""
	I0127 14:15:09.448359  604817 logs.go:282] 0 containers: []
	W0127 14:15:09.448368  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:09.448375  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:09.448430  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:09.490602  604817 cri.go:89] found id: ""
	I0127 14:15:09.490628  604817 logs.go:282] 0 containers: []
	W0127 14:15:09.490638  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:09.490645  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:09.490703  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:09.523978  604817 cri.go:89] found id: ""
	I0127 14:15:09.524004  604817 logs.go:282] 0 containers: []
	W0127 14:15:09.524014  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:09.524021  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:09.524067  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:09.558353  604817 cri.go:89] found id: ""
	I0127 14:15:09.558381  604817 logs.go:282] 0 containers: []
	W0127 14:15:09.558391  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:09.558399  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:09.558457  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:09.591549  604817 cri.go:89] found id: ""
	I0127 14:15:09.591580  604817 logs.go:282] 0 containers: []
	W0127 14:15:09.591589  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:09.591595  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:09.591643  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:09.624589  604817 cri.go:89] found id: ""
	I0127 14:15:09.624619  604817 logs.go:282] 0 containers: []
	W0127 14:15:09.624630  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:09.624646  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:09.624707  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:09.665077  604817 cri.go:89] found id: ""
	I0127 14:15:09.665106  604817 logs.go:282] 0 containers: []
	W0127 14:15:09.665114  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:09.665125  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:09.665136  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:09.679394  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:09.679421  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:09.756759  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:09.756791  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:09.756810  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:09.835677  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:09.835706  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:09.873611  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:09.873648  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:12.420527  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:12.434582  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:12.434638  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:12.477312  604817 cri.go:89] found id: ""
	I0127 14:15:12.477339  604817 logs.go:282] 0 containers: []
	W0127 14:15:12.477349  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:12.477355  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:12.477416  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:12.510691  604817 cri.go:89] found id: ""
	I0127 14:15:12.510715  604817 logs.go:282] 0 containers: []
	W0127 14:15:12.510723  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:12.510729  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:12.510776  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:12.544057  604817 cri.go:89] found id: ""
	I0127 14:15:12.544086  604817 logs.go:282] 0 containers: []
	W0127 14:15:12.544097  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:12.544105  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:12.544196  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:12.581963  604817 cri.go:89] found id: ""
	I0127 14:15:12.581991  604817 logs.go:282] 0 containers: []
	W0127 14:15:12.582000  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:12.582006  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:12.582070  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:12.617283  604817 cri.go:89] found id: ""
	I0127 14:15:12.617308  604817 logs.go:282] 0 containers: []
	W0127 14:15:12.617316  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:12.617325  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:12.617386  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:12.650667  604817 cri.go:89] found id: ""
	I0127 14:15:12.650692  604817 logs.go:282] 0 containers: []
	W0127 14:15:12.650699  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:12.650705  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:12.650755  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:12.688191  604817 cri.go:89] found id: ""
	I0127 14:15:12.688214  604817 logs.go:282] 0 containers: []
	W0127 14:15:12.688222  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:12.688227  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:12.688275  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:12.728832  604817 cri.go:89] found id: ""
	I0127 14:15:12.728852  604817 logs.go:282] 0 containers: []
	W0127 14:15:12.728859  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:12.728871  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:12.728881  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:12.781644  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:12.781672  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:12.794512  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:12.794535  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:12.869637  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:12.869663  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:12.869680  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:12.944627  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:12.944655  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:15.482906  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:15.496020  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:15.496083  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:15.529549  604817 cri.go:89] found id: ""
	I0127 14:15:15.529575  604817 logs.go:282] 0 containers: []
	W0127 14:15:15.529598  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:15.529606  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:15.529665  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:15.563517  604817 cri.go:89] found id: ""
	I0127 14:15:15.563540  604817 logs.go:282] 0 containers: []
	W0127 14:15:15.563548  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:15.563553  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:15.563599  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:15.594178  604817 cri.go:89] found id: ""
	I0127 14:15:15.594201  604817 logs.go:282] 0 containers: []
	W0127 14:15:15.594208  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:15.594214  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:15.594260  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:15.625047  604817 cri.go:89] found id: ""
	I0127 14:15:15.625077  604817 logs.go:282] 0 containers: []
	W0127 14:15:15.625084  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:15.625089  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:15.625146  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:15.664837  604817 cri.go:89] found id: ""
	I0127 14:15:15.664861  604817 logs.go:282] 0 containers: []
	W0127 14:15:15.664868  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:15.664873  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:15.664931  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:15.695608  604817 cri.go:89] found id: ""
	I0127 14:15:15.695632  604817 logs.go:282] 0 containers: []
	W0127 14:15:15.695641  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:15.695649  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:15.695701  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:15.731581  604817 cri.go:89] found id: ""
	I0127 14:15:15.731610  604817 logs.go:282] 0 containers: []
	W0127 14:15:15.731620  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:15.731630  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:15.731693  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:15.770761  604817 cri.go:89] found id: ""
	I0127 14:15:15.770785  604817 logs.go:282] 0 containers: []
	W0127 14:15:15.770793  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:15.770802  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:15.770814  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:15.823619  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:15.823652  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:15.837415  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:15.837436  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:15.908387  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:15.908414  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:15.908432  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:15.985945  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:15.985974  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:18.524054  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:18.540080  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:18.540164  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:18.579669  604817 cri.go:89] found id: ""
	I0127 14:15:18.579696  604817 logs.go:282] 0 containers: []
	W0127 14:15:18.579707  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:18.579715  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:18.579773  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:18.618687  604817 cri.go:89] found id: ""
	I0127 14:15:18.618714  604817 logs.go:282] 0 containers: []
	W0127 14:15:18.618725  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:18.618732  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:18.618793  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:18.659144  604817 cri.go:89] found id: ""
	I0127 14:15:18.659165  604817 logs.go:282] 0 containers: []
	W0127 14:15:18.659172  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:18.659178  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:18.659225  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:18.696171  604817 cri.go:89] found id: ""
	I0127 14:15:18.696193  604817 logs.go:282] 0 containers: []
	W0127 14:15:18.696200  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:18.696205  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:18.696257  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:18.727038  604817 cri.go:89] found id: ""
	I0127 14:15:18.727059  604817 logs.go:282] 0 containers: []
	W0127 14:15:18.727065  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:18.727071  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:18.727115  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:18.758436  604817 cri.go:89] found id: ""
	I0127 14:15:18.758460  604817 logs.go:282] 0 containers: []
	W0127 14:15:18.758469  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:18.758476  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:18.758532  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:18.792056  604817 cri.go:89] found id: ""
	I0127 14:15:18.792083  604817 logs.go:282] 0 containers: []
	W0127 14:15:18.792093  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:18.792104  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:18.792170  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:18.829924  604817 cri.go:89] found id: ""
	I0127 14:15:18.829951  604817 logs.go:282] 0 containers: []
	W0127 14:15:18.829961  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:18.829973  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:18.829988  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:18.883381  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:18.883414  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:18.896722  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:18.896748  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:18.968994  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:18.969013  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:18.969028  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:19.049806  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:19.049838  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:21.596576  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:21.610977  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:21.611051  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:21.650025  604817 cri.go:89] found id: ""
	I0127 14:15:21.650053  604817 logs.go:282] 0 containers: []
	W0127 14:15:21.650064  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:21.650072  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:21.650145  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:21.686037  604817 cri.go:89] found id: ""
	I0127 14:15:21.686068  604817 logs.go:282] 0 containers: []
	W0127 14:15:21.686078  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:21.686086  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:21.686155  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:21.721980  604817 cri.go:89] found id: ""
	I0127 14:15:21.722013  604817 logs.go:282] 0 containers: []
	W0127 14:15:21.722024  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:21.722033  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:21.722106  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:21.753990  604817 cri.go:89] found id: ""
	I0127 14:15:21.754019  604817 logs.go:282] 0 containers: []
	W0127 14:15:21.754029  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:21.754036  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:21.754092  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:21.786247  604817 cri.go:89] found id: ""
	I0127 14:15:21.786273  604817 logs.go:282] 0 containers: []
	W0127 14:15:21.786284  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:21.786291  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:21.786353  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:21.820770  604817 cri.go:89] found id: ""
	I0127 14:15:21.820794  604817 logs.go:282] 0 containers: []
	W0127 14:15:21.820802  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:21.820810  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:21.820865  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:21.855605  604817 cri.go:89] found id: ""
	I0127 14:15:21.855633  604817 logs.go:282] 0 containers: []
	W0127 14:15:21.855647  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:21.855655  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:21.855714  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:21.887415  604817 cri.go:89] found id: ""
	I0127 14:15:21.887441  604817 logs.go:282] 0 containers: []
	W0127 14:15:21.887454  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:21.887466  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:21.887481  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:21.954863  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:21.954889  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:21.954905  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:22.042420  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:22.042461  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:22.088836  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:22.088877  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:22.146690  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:22.146717  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:24.675207  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:24.690144  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:24.690199  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:24.731005  604817 cri.go:89] found id: ""
	I0127 14:15:24.731035  604817 logs.go:282] 0 containers: []
	W0127 14:15:24.731046  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:24.731053  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:24.731112  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:24.768349  604817 cri.go:89] found id: ""
	I0127 14:15:24.768380  604817 logs.go:282] 0 containers: []
	W0127 14:15:24.768390  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:24.768398  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:24.768462  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:24.803976  604817 cri.go:89] found id: ""
	I0127 14:15:24.804004  604817 logs.go:282] 0 containers: []
	W0127 14:15:24.804017  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:24.804024  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:24.804086  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:24.837638  604817 cri.go:89] found id: ""
	I0127 14:15:24.837665  604817 logs.go:282] 0 containers: []
	W0127 14:15:24.837674  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:24.837681  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:24.837740  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:24.876354  604817 cri.go:89] found id: ""
	I0127 14:15:24.876381  604817 logs.go:282] 0 containers: []
	W0127 14:15:24.876392  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:24.876399  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:24.876455  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:24.913397  604817 cri.go:89] found id: ""
	I0127 14:15:24.913427  604817 logs.go:282] 0 containers: []
	W0127 14:15:24.913437  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:24.913443  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:24.913498  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:24.948058  604817 cri.go:89] found id: ""
	I0127 14:15:24.948082  604817 logs.go:282] 0 containers: []
	W0127 14:15:24.948092  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:24.948099  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:24.948145  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:24.979859  604817 cri.go:89] found id: ""
	I0127 14:15:24.979885  604817 logs.go:282] 0 containers: []
	W0127 14:15:24.979895  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:24.979908  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:24.979924  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:25.056583  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:25.056607  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:25.056622  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:25.134264  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:25.134293  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:25.175034  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:25.175077  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:25.229148  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:25.229176  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:27.745682  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:27.758866  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:27.758936  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:27.792735  604817 cri.go:89] found id: ""
	I0127 14:15:27.792765  604817 logs.go:282] 0 containers: []
	W0127 14:15:27.792777  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:27.792785  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:27.792840  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:27.830569  604817 cri.go:89] found id: ""
	I0127 14:15:27.830594  604817 logs.go:282] 0 containers: []
	W0127 14:15:27.830608  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:27.830617  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:27.830667  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:27.864882  604817 cri.go:89] found id: ""
	I0127 14:15:27.864908  604817 logs.go:282] 0 containers: []
	W0127 14:15:27.864920  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:27.864929  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:27.864986  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:27.904366  604817 cri.go:89] found id: ""
	I0127 14:15:27.904395  604817 logs.go:282] 0 containers: []
	W0127 14:15:27.904402  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:27.904408  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:27.904466  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:27.951355  604817 cri.go:89] found id: ""
	I0127 14:15:27.951385  604817 logs.go:282] 0 containers: []
	W0127 14:15:27.951396  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:27.951425  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:27.951494  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:27.983424  604817 cri.go:89] found id: ""
	I0127 14:15:27.983454  604817 logs.go:282] 0 containers: []
	W0127 14:15:27.983465  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:27.983473  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:27.983529  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:28.015728  604817 cri.go:89] found id: ""
	I0127 14:15:28.015762  604817 logs.go:282] 0 containers: []
	W0127 14:15:28.015772  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:28.015780  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:28.015839  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:28.049907  604817 cri.go:89] found id: ""
	I0127 14:15:28.049937  604817 logs.go:282] 0 containers: []
	W0127 14:15:28.049948  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:28.049961  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:28.049976  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:28.099295  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:28.099323  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:28.112504  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:28.112532  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:28.185355  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:28.185379  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:28.185395  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:28.268255  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:28.268290  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:30.811602  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:30.827459  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:30.827536  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:30.862901  604817 cri.go:89] found id: ""
	I0127 14:15:30.862926  604817 logs.go:282] 0 containers: []
	W0127 14:15:30.862934  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:30.862940  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:30.862987  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:30.896684  604817 cri.go:89] found id: ""
	I0127 14:15:30.896720  604817 logs.go:282] 0 containers: []
	W0127 14:15:30.896732  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:30.896742  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:30.896795  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:30.932515  604817 cri.go:89] found id: ""
	I0127 14:15:30.932543  604817 logs.go:282] 0 containers: []
	W0127 14:15:30.932552  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:30.932560  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:30.932625  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:30.966334  604817 cri.go:89] found id: ""
	I0127 14:15:30.966357  604817 logs.go:282] 0 containers: []
	W0127 14:15:30.966371  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:30.966379  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:30.966434  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:31.002134  604817 cri.go:89] found id: ""
	I0127 14:15:31.002158  604817 logs.go:282] 0 containers: []
	W0127 14:15:31.002166  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:31.002174  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:31.002236  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:31.035900  604817 cri.go:89] found id: ""
	I0127 14:15:31.035927  604817 logs.go:282] 0 containers: []
	W0127 14:15:31.035937  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:31.035945  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:31.036006  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:31.070763  604817 cri.go:89] found id: ""
	I0127 14:15:31.070790  604817 logs.go:282] 0 containers: []
	W0127 14:15:31.070800  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:31.070807  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:31.070864  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:31.101233  604817 cri.go:89] found id: ""
	I0127 14:15:31.101259  604817 logs.go:282] 0 containers: []
	W0127 14:15:31.101268  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:31.101281  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:31.101296  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:31.174263  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:31.174291  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:31.211236  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:31.211263  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:31.258953  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:31.258978  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:31.271929  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:31.271953  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:31.341559  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:33.842119  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:33.856268  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:33.856362  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:33.893020  604817 cri.go:89] found id: ""
	I0127 14:15:33.893045  604817 logs.go:282] 0 containers: []
	W0127 14:15:33.893053  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:33.893058  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:33.893102  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:33.925866  604817 cri.go:89] found id: ""
	I0127 14:15:33.925896  604817 logs.go:282] 0 containers: []
	W0127 14:15:33.925905  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:33.925911  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:33.925963  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:33.961887  604817 cri.go:89] found id: ""
	I0127 14:15:33.961918  604817 logs.go:282] 0 containers: []
	W0127 14:15:33.961930  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:33.961939  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:33.962000  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:33.996591  604817 cri.go:89] found id: ""
	I0127 14:15:33.996623  604817 logs.go:282] 0 containers: []
	W0127 14:15:33.996637  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:33.996645  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:33.996703  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:34.032977  604817 cri.go:89] found id: ""
	I0127 14:15:34.033003  604817 logs.go:282] 0 containers: []
	W0127 14:15:34.033010  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:34.033015  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:34.033067  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:34.066934  604817 cri.go:89] found id: ""
	I0127 14:15:34.066959  604817 logs.go:282] 0 containers: []
	W0127 14:15:34.066967  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:34.066973  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:34.067022  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:34.101622  604817 cri.go:89] found id: ""
	I0127 14:15:34.101651  604817 logs.go:282] 0 containers: []
	W0127 14:15:34.101661  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:34.101668  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:34.101714  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:34.145139  604817 cri.go:89] found id: ""
	I0127 14:15:34.145162  604817 logs.go:282] 0 containers: []
	W0127 14:15:34.145170  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:34.145181  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:34.145195  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:34.194318  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:34.194342  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:34.206943  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:34.206963  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:34.278710  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:34.278739  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:34.278754  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:34.367506  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:34.367537  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:36.919989  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:36.933200  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:36.933265  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:36.970106  604817 cri.go:89] found id: ""
	I0127 14:15:36.970131  604817 logs.go:282] 0 containers: []
	W0127 14:15:36.970139  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:36.970146  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:36.970196  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:37.002507  604817 cri.go:89] found id: ""
	I0127 14:15:37.002529  604817 logs.go:282] 0 containers: []
	W0127 14:15:37.002536  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:37.002542  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:37.002589  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:37.034738  604817 cri.go:89] found id: ""
	I0127 14:15:37.034762  604817 logs.go:282] 0 containers: []
	W0127 14:15:37.034772  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:37.034780  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:37.034825  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:37.066132  604817 cri.go:89] found id: ""
	I0127 14:15:37.066160  604817 logs.go:282] 0 containers: []
	W0127 14:15:37.066169  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:37.066175  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:37.066224  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:37.110379  604817 cri.go:89] found id: ""
	I0127 14:15:37.110415  604817 logs.go:282] 0 containers: []
	W0127 14:15:37.110427  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:37.110436  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:37.110508  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:37.150958  604817 cri.go:89] found id: ""
	I0127 14:15:37.150995  604817 logs.go:282] 0 containers: []
	W0127 14:15:37.151007  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:37.151016  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:37.151083  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:37.188018  604817 cri.go:89] found id: ""
	I0127 14:15:37.188058  604817 logs.go:282] 0 containers: []
	W0127 14:15:37.188071  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:37.188079  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:37.188142  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:37.222118  604817 cri.go:89] found id: ""
	I0127 14:15:37.222146  604817 logs.go:282] 0 containers: []
	W0127 14:15:37.222154  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:37.222165  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:37.222177  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:37.274349  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:37.274386  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:37.289334  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:37.289377  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:37.367400  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:37.367423  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:37.367437  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:37.444826  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:37.444866  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:39.985717  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:40.002231  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:40.002306  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:40.037553  604817 cri.go:89] found id: ""
	I0127 14:15:40.037592  604817 logs.go:282] 0 containers: []
	W0127 14:15:40.037606  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:40.037614  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:40.037672  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:40.080841  604817 cri.go:89] found id: ""
	I0127 14:15:40.080871  604817 logs.go:282] 0 containers: []
	W0127 14:15:40.080882  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:40.080891  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:40.080953  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:40.132628  604817 cri.go:89] found id: ""
	I0127 14:15:40.132657  604817 logs.go:282] 0 containers: []
	W0127 14:15:40.132668  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:40.132675  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:40.132738  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:40.170240  604817 cri.go:89] found id: ""
	I0127 14:15:40.170276  604817 logs.go:282] 0 containers: []
	W0127 14:15:40.170288  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:40.170298  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:40.170368  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:40.214065  604817 cri.go:89] found id: ""
	I0127 14:15:40.214099  604817 logs.go:282] 0 containers: []
	W0127 14:15:40.214111  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:40.214119  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:40.214228  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:40.254429  604817 cri.go:89] found id: ""
	I0127 14:15:40.254460  604817 logs.go:282] 0 containers: []
	W0127 14:15:40.254470  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:40.254479  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:40.254529  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:40.291412  604817 cri.go:89] found id: ""
	I0127 14:15:40.291444  604817 logs.go:282] 0 containers: []
	W0127 14:15:40.291455  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:40.291463  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:40.291528  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:40.326938  604817 cri.go:89] found id: ""
	I0127 14:15:40.326965  604817 logs.go:282] 0 containers: []
	W0127 14:15:40.326974  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:40.326986  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:40.327002  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:40.341343  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:40.341376  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:40.410038  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:40.410061  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:40.410077  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:40.488225  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:40.488258  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:40.528546  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:40.528584  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:43.090794  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:43.106384  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:43.106459  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:43.138982  604817 cri.go:89] found id: ""
	I0127 14:15:43.139011  604817 logs.go:282] 0 containers: []
	W0127 14:15:43.139022  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:43.139030  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:43.139089  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:43.171092  604817 cri.go:89] found id: ""
	I0127 14:15:43.171115  604817 logs.go:282] 0 containers: []
	W0127 14:15:43.171123  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:43.171128  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:43.171186  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:43.204487  604817 cri.go:89] found id: ""
	I0127 14:15:43.204517  604817 logs.go:282] 0 containers: []
	W0127 14:15:43.204528  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:43.204535  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:43.204590  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:43.244316  604817 cri.go:89] found id: ""
	I0127 14:15:43.244343  604817 logs.go:282] 0 containers: []
	W0127 14:15:43.244354  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:43.244362  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:43.244438  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:43.278981  604817 cri.go:89] found id: ""
	I0127 14:15:43.279008  604817 logs.go:282] 0 containers: []
	W0127 14:15:43.279020  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:43.279027  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:43.279140  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:43.313377  604817 cri.go:89] found id: ""
	I0127 14:15:43.313407  604817 logs.go:282] 0 containers: []
	W0127 14:15:43.313417  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:43.313426  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:43.313485  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:43.351440  604817 cri.go:89] found id: ""
	I0127 14:15:43.351465  604817 logs.go:282] 0 containers: []
	W0127 14:15:43.351473  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:43.351479  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:43.351530  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:43.383455  604817 cri.go:89] found id: ""
	I0127 14:15:43.383478  604817 logs.go:282] 0 containers: []
	W0127 14:15:43.383488  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:43.383500  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:43.383511  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:43.433106  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:43.433141  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:43.445965  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:43.445989  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:43.513149  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:43.513173  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:43.513189  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:43.589884  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:43.589916  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:46.130039  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:46.148375  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:46.148459  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:46.181202  604817 cri.go:89] found id: ""
	I0127 14:15:46.181231  604817 logs.go:282] 0 containers: []
	W0127 14:15:46.181248  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:46.181258  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:46.181323  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:46.212905  604817 cri.go:89] found id: ""
	I0127 14:15:46.212939  604817 logs.go:282] 0 containers: []
	W0127 14:15:46.212950  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:46.212958  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:46.213012  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:46.245794  604817 cri.go:89] found id: ""
	I0127 14:15:46.245822  604817 logs.go:282] 0 containers: []
	W0127 14:15:46.245832  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:46.245840  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:46.245897  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:46.280471  604817 cri.go:89] found id: ""
	I0127 14:15:46.280498  604817 logs.go:282] 0 containers: []
	W0127 14:15:46.280508  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:46.280515  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:46.280568  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:46.317124  604817 cri.go:89] found id: ""
	I0127 14:15:46.317151  604817 logs.go:282] 0 containers: []
	W0127 14:15:46.317161  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:46.317168  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:46.317212  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:46.350323  604817 cri.go:89] found id: ""
	I0127 14:15:46.350350  604817 logs.go:282] 0 containers: []
	W0127 14:15:46.350356  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:46.350361  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:46.350414  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:46.390193  604817 cri.go:89] found id: ""
	I0127 14:15:46.390223  604817 logs.go:282] 0 containers: []
	W0127 14:15:46.390234  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:46.390244  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:46.390335  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:46.422052  604817 cri.go:89] found id: ""
	I0127 14:15:46.422073  604817 logs.go:282] 0 containers: []
	W0127 14:15:46.422081  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:46.422090  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:46.422102  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:46.487762  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:46.487786  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:46.487802  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:46.562360  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:46.562393  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:46.607312  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:46.607356  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:46.658673  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:46.658703  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:49.173714  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:49.186293  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:49.186374  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:49.220240  604817 cri.go:89] found id: ""
	I0127 14:15:49.220268  604817 logs.go:282] 0 containers: []
	W0127 14:15:49.220275  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:49.220287  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:49.220339  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:49.256805  604817 cri.go:89] found id: ""
	I0127 14:15:49.256826  604817 logs.go:282] 0 containers: []
	W0127 14:15:49.256834  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:49.256840  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:49.256892  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:49.289851  604817 cri.go:89] found id: ""
	I0127 14:15:49.289874  604817 logs.go:282] 0 containers: []
	W0127 14:15:49.289890  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:49.289898  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:49.289959  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:49.329416  604817 cri.go:89] found id: ""
	I0127 14:15:49.329463  604817 logs.go:282] 0 containers: []
	W0127 14:15:49.329474  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:49.329483  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:49.329533  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:49.361039  604817 cri.go:89] found id: ""
	I0127 14:15:49.361062  604817 logs.go:282] 0 containers: []
	W0127 14:15:49.361070  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:49.361075  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:49.361119  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:49.391961  604817 cri.go:89] found id: ""
	I0127 14:15:49.391987  604817 logs.go:282] 0 containers: []
	W0127 14:15:49.391994  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:49.392004  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:49.392054  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:49.424696  604817 cri.go:89] found id: ""
	I0127 14:15:49.424716  604817 logs.go:282] 0 containers: []
	W0127 14:15:49.424723  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:49.424728  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:49.424777  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:49.458111  604817 cri.go:89] found id: ""
	I0127 14:15:49.458135  604817 logs.go:282] 0 containers: []
	W0127 14:15:49.458146  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:49.458160  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:49.458178  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:49.471918  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:49.472001  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:49.547178  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:49.547208  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:49.547226  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:49.622013  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:49.622055  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:49.665828  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:49.665855  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:52.221747  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:52.235301  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:52.235363  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:52.273908  604817 cri.go:89] found id: ""
	I0127 14:15:52.273932  604817 logs.go:282] 0 containers: []
	W0127 14:15:52.273939  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:52.273947  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:52.274005  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:52.310327  604817 cri.go:89] found id: ""
	I0127 14:15:52.310349  604817 logs.go:282] 0 containers: []
	W0127 14:15:52.310356  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:52.310362  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:52.310412  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:52.345963  604817 cri.go:89] found id: ""
	I0127 14:15:52.345986  604817 logs.go:282] 0 containers: []
	W0127 14:15:52.345994  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:52.346000  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:52.346045  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:52.387140  604817 cri.go:89] found id: ""
	I0127 14:15:52.387172  604817 logs.go:282] 0 containers: []
	W0127 14:15:52.387182  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:52.387191  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:52.387257  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:52.422473  604817 cri.go:89] found id: ""
	I0127 14:15:52.422511  604817 logs.go:282] 0 containers: []
	W0127 14:15:52.422525  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:52.422532  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:52.422597  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:52.455075  604817 cri.go:89] found id: ""
	I0127 14:15:52.455107  604817 logs.go:282] 0 containers: []
	W0127 14:15:52.455126  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:52.455133  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:52.455194  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:52.486457  604817 cri.go:89] found id: ""
	I0127 14:15:52.486483  604817 logs.go:282] 0 containers: []
	W0127 14:15:52.486491  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:52.486496  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:52.486540  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:52.517140  604817 cri.go:89] found id: ""
	I0127 14:15:52.517169  604817 logs.go:282] 0 containers: []
	W0127 14:15:52.517179  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:52.517191  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:52.517210  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:52.569534  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:52.569562  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:52.583217  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:52.583242  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:52.651531  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:52.651558  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:52.651576  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:52.725089  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:52.725119  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:55.262790  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:55.276012  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:55.276077  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:55.311116  604817 cri.go:89] found id: ""
	I0127 14:15:55.311146  604817 logs.go:282] 0 containers: []
	W0127 14:15:55.311153  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:55.311159  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:55.311216  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:55.346275  604817 cri.go:89] found id: ""
	I0127 14:15:55.346303  604817 logs.go:282] 0 containers: []
	W0127 14:15:55.346312  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:55.346320  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:55.346379  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:55.378417  604817 cri.go:89] found id: ""
	I0127 14:15:55.378439  604817 logs.go:282] 0 containers: []
	W0127 14:15:55.378447  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:55.378453  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:55.378496  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:55.411176  604817 cri.go:89] found id: ""
	I0127 14:15:55.411207  604817 logs.go:282] 0 containers: []
	W0127 14:15:55.411218  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:55.411226  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:55.411285  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:55.443072  604817 cri.go:89] found id: ""
	I0127 14:15:55.443099  604817 logs.go:282] 0 containers: []
	W0127 14:15:55.443109  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:55.443117  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:55.443176  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:55.478435  604817 cri.go:89] found id: ""
	I0127 14:15:55.478466  604817 logs.go:282] 0 containers: []
	W0127 14:15:55.478478  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:55.478487  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:55.478557  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:55.513495  604817 cri.go:89] found id: ""
	I0127 14:15:55.513519  604817 logs.go:282] 0 containers: []
	W0127 14:15:55.513526  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:55.513531  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:55.513591  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:55.546649  604817 cri.go:89] found id: ""
	I0127 14:15:55.546670  604817 logs.go:282] 0 containers: []
	W0127 14:15:55.546677  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:55.546687  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:55.546703  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:55.597065  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:55.597096  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:15:55.610228  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:55.610249  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:55.677715  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:55.677741  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:55.677757  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:55.763185  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:55.763219  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:58.304283  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:15:58.317312  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:15:58.317384  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:15:58.354186  604817 cri.go:89] found id: ""
	I0127 14:15:58.354215  604817 logs.go:282] 0 containers: []
	W0127 14:15:58.354223  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:15:58.354230  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:15:58.354283  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:15:58.392109  604817 cri.go:89] found id: ""
	I0127 14:15:58.392149  604817 logs.go:282] 0 containers: []
	W0127 14:15:58.392162  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:15:58.392170  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:15:58.392235  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:15:58.424379  604817 cri.go:89] found id: ""
	I0127 14:15:58.424411  604817 logs.go:282] 0 containers: []
	W0127 14:15:58.424422  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:15:58.424430  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:15:58.424481  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:15:58.454364  604817 cri.go:89] found id: ""
	I0127 14:15:58.454393  604817 logs.go:282] 0 containers: []
	W0127 14:15:58.454403  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:15:58.454412  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:15:58.454471  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:15:58.491877  604817 cri.go:89] found id: ""
	I0127 14:15:58.491907  604817 logs.go:282] 0 containers: []
	W0127 14:15:58.491918  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:15:58.491927  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:15:58.492017  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:15:58.525519  604817 cri.go:89] found id: ""
	I0127 14:15:58.525541  604817 logs.go:282] 0 containers: []
	W0127 14:15:58.525548  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:15:58.525554  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:15:58.525626  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:15:58.559011  604817 cri.go:89] found id: ""
	I0127 14:15:58.559039  604817 logs.go:282] 0 containers: []
	W0127 14:15:58.559051  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:15:58.559059  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:15:58.559130  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:15:58.591829  604817 cri.go:89] found id: ""
	I0127 14:15:58.591856  604817 logs.go:282] 0 containers: []
	W0127 14:15:58.591868  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:15:58.591883  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:15:58.591898  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:15:58.658241  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:15:58.658273  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:15:58.658286  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:15:58.739022  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:15:58.739051  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:15:58.778170  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:15:58.778200  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:15:58.829860  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:15:58.829884  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:01.342818  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:01.358185  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:01.358263  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:01.393765  604817 cri.go:89] found id: ""
	I0127 14:16:01.393797  604817 logs.go:282] 0 containers: []
	W0127 14:16:01.393809  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:01.393817  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:01.393879  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:01.426999  604817 cri.go:89] found id: ""
	I0127 14:16:01.427025  604817 logs.go:282] 0 containers: []
	W0127 14:16:01.427035  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:01.427042  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:01.427113  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:01.459255  604817 cri.go:89] found id: ""
	I0127 14:16:01.459279  604817 logs.go:282] 0 containers: []
	W0127 14:16:01.459289  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:01.459296  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:01.459374  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:01.491407  604817 cri.go:89] found id: ""
	I0127 14:16:01.491434  604817 logs.go:282] 0 containers: []
	W0127 14:16:01.491445  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:01.491457  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:01.491519  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:01.522788  604817 cri.go:89] found id: ""
	I0127 14:16:01.522814  604817 logs.go:282] 0 containers: []
	W0127 14:16:01.522829  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:01.522837  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:01.522892  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:01.561264  604817 cri.go:89] found id: ""
	I0127 14:16:01.561287  604817 logs.go:282] 0 containers: []
	W0127 14:16:01.561297  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:01.561305  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:01.561360  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:01.597172  604817 cri.go:89] found id: ""
	I0127 14:16:01.597194  604817 logs.go:282] 0 containers: []
	W0127 14:16:01.597203  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:01.597211  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:01.597270  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:01.630347  604817 cri.go:89] found id: ""
	I0127 14:16:01.630377  604817 logs.go:282] 0 containers: []
	W0127 14:16:01.630384  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:01.630395  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:01.630408  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:01.680926  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:01.680955  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:01.693979  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:01.694001  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:01.762872  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:01.762889  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:01.762901  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:01.843665  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:01.843689  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:04.383357  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:04.396578  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:04.396650  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:04.438973  604817 cri.go:89] found id: ""
	I0127 14:16:04.438999  604817 logs.go:282] 0 containers: []
	W0127 14:16:04.439007  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:04.439016  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:04.439071  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:04.476240  604817 cri.go:89] found id: ""
	I0127 14:16:04.476263  604817 logs.go:282] 0 containers: []
	W0127 14:16:04.476270  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:04.476277  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:04.476330  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:04.507879  604817 cri.go:89] found id: ""
	I0127 14:16:04.507899  604817 logs.go:282] 0 containers: []
	W0127 14:16:04.507906  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:04.507912  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:04.507954  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:04.538471  604817 cri.go:89] found id: ""
	I0127 14:16:04.538504  604817 logs.go:282] 0 containers: []
	W0127 14:16:04.538515  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:04.538522  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:04.538580  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:04.570523  604817 cri.go:89] found id: ""
	I0127 14:16:04.570553  604817 logs.go:282] 0 containers: []
	W0127 14:16:04.570563  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:04.570571  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:04.570628  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:04.601488  604817 cri.go:89] found id: ""
	I0127 14:16:04.601512  604817 logs.go:282] 0 containers: []
	W0127 14:16:04.601521  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:04.601529  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:04.601603  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:04.632734  604817 cri.go:89] found id: ""
	I0127 14:16:04.632760  604817 logs.go:282] 0 containers: []
	W0127 14:16:04.632770  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:04.632776  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:04.632818  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:04.666473  604817 cri.go:89] found id: ""
	I0127 14:16:04.666492  604817 logs.go:282] 0 containers: []
	W0127 14:16:04.666507  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:04.666518  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:04.666530  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:04.746145  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:04.746174  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:04.782040  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:04.782070  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:04.831037  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:04.831063  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:04.844029  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:04.844053  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:04.917009  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:07.417433  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:07.431041  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:07.431105  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:07.465371  604817 cri.go:89] found id: ""
	I0127 14:16:07.465402  604817 logs.go:282] 0 containers: []
	W0127 14:16:07.465413  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:07.465421  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:07.465477  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:07.506404  604817 cri.go:89] found id: ""
	I0127 14:16:07.506427  604817 logs.go:282] 0 containers: []
	W0127 14:16:07.506434  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:07.506440  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:07.506490  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:07.539334  604817 cri.go:89] found id: ""
	I0127 14:16:07.539367  604817 logs.go:282] 0 containers: []
	W0127 14:16:07.539377  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:07.539387  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:07.539446  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:07.576296  604817 cri.go:89] found id: ""
	I0127 14:16:07.576333  604817 logs.go:282] 0 containers: []
	W0127 14:16:07.576345  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:07.576371  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:07.576422  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:07.613729  604817 cri.go:89] found id: ""
	I0127 14:16:07.613756  604817 logs.go:282] 0 containers: []
	W0127 14:16:07.613767  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:07.613775  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:07.613837  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:07.646494  604817 cri.go:89] found id: ""
	I0127 14:16:07.646521  604817 logs.go:282] 0 containers: []
	W0127 14:16:07.646531  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:07.646538  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:07.646593  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:07.679030  604817 cri.go:89] found id: ""
	I0127 14:16:07.679061  604817 logs.go:282] 0 containers: []
	W0127 14:16:07.679071  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:07.679079  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:07.679152  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:07.711966  604817 cri.go:89] found id: ""
	I0127 14:16:07.711993  604817 logs.go:282] 0 containers: []
	W0127 14:16:07.712004  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:07.712017  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:07.712032  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:07.780949  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:07.780968  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:07.780983  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:07.868795  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:07.868825  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:07.909700  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:07.909729  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:07.962621  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:07.962645  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:10.477834  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:10.491683  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:10.491753  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:10.526028  604817 cri.go:89] found id: ""
	I0127 14:16:10.526053  604817 logs.go:282] 0 containers: []
	W0127 14:16:10.526064  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:10.526073  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:10.526130  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:10.558443  604817 cri.go:89] found id: ""
	I0127 14:16:10.558473  604817 logs.go:282] 0 containers: []
	W0127 14:16:10.558485  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:10.558493  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:10.558533  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:10.593633  604817 cri.go:89] found id: ""
	I0127 14:16:10.593655  604817 logs.go:282] 0 containers: []
	W0127 14:16:10.593663  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:10.593668  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:10.593719  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:10.624182  604817 cri.go:89] found id: ""
	I0127 14:16:10.624207  604817 logs.go:282] 0 containers: []
	W0127 14:16:10.624217  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:10.624225  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:10.624283  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:10.658619  604817 cri.go:89] found id: ""
	I0127 14:16:10.658646  604817 logs.go:282] 0 containers: []
	W0127 14:16:10.658655  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:10.658664  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:10.658721  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:10.690059  604817 cri.go:89] found id: ""
	I0127 14:16:10.690088  604817 logs.go:282] 0 containers: []
	W0127 14:16:10.690099  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:10.690112  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:10.690179  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:10.720744  604817 cri.go:89] found id: ""
	I0127 14:16:10.720770  604817 logs.go:282] 0 containers: []
	W0127 14:16:10.720781  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:10.720788  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:10.720841  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:10.753480  604817 cri.go:89] found id: ""
	I0127 14:16:10.753506  604817 logs.go:282] 0 containers: []
	W0127 14:16:10.753518  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:10.753529  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:10.753540  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:10.817786  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:10.817813  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:10.817827  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:10.897486  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:10.897510  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:10.934300  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:10.934334  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:10.980460  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:10.980483  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:13.493714  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:13.507572  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:13.507631  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:13.549001  604817 cri.go:89] found id: ""
	I0127 14:16:13.549031  604817 logs.go:282] 0 containers: []
	W0127 14:16:13.549044  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:13.549055  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:13.549131  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:13.581844  604817 cri.go:89] found id: ""
	I0127 14:16:13.581872  604817 logs.go:282] 0 containers: []
	W0127 14:16:13.581880  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:13.581886  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:13.581932  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:13.619701  604817 cri.go:89] found id: ""
	I0127 14:16:13.619735  604817 logs.go:282] 0 containers: []
	W0127 14:16:13.619746  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:13.619754  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:13.619821  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:13.657076  604817 cri.go:89] found id: ""
	I0127 14:16:13.657102  604817 logs.go:282] 0 containers: []
	W0127 14:16:13.657110  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:13.657116  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:13.657170  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:13.690975  604817 cri.go:89] found id: ""
	I0127 14:16:13.691002  604817 logs.go:282] 0 containers: []
	W0127 14:16:13.691009  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:13.691015  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:13.691064  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:13.724784  604817 cri.go:89] found id: ""
	I0127 14:16:13.724815  604817 logs.go:282] 0 containers: []
	W0127 14:16:13.724823  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:13.724829  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:13.724885  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:13.765921  604817 cri.go:89] found id: ""
	I0127 14:16:13.765942  604817 logs.go:282] 0 containers: []
	W0127 14:16:13.765948  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:13.765954  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:13.765997  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:13.800713  604817 cri.go:89] found id: ""
	I0127 14:16:13.800732  604817 logs.go:282] 0 containers: []
	W0127 14:16:13.800739  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:13.800749  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:13.800760  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:13.850546  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:13.850572  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:13.865143  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:13.865171  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:13.953661  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:13.953685  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:13.953700  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:14.038345  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:14.038380  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:16.583951  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:16.598704  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:16.598787  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:16.631760  604817 cri.go:89] found id: ""
	I0127 14:16:16.631787  604817 logs.go:282] 0 containers: []
	W0127 14:16:16.631795  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:16.631801  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:16.631852  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:16.664552  604817 cri.go:89] found id: ""
	I0127 14:16:16.664587  604817 logs.go:282] 0 containers: []
	W0127 14:16:16.664598  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:16.664607  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:16.664673  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:16.704705  604817 cri.go:89] found id: ""
	I0127 14:16:16.704735  604817 logs.go:282] 0 containers: []
	W0127 14:16:16.704745  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:16.704753  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:16.704809  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:16.739419  604817 cri.go:89] found id: ""
	I0127 14:16:16.739452  604817 logs.go:282] 0 containers: []
	W0127 14:16:16.739464  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:16.739473  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:16.739538  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:16.770376  604817 cri.go:89] found id: ""
	I0127 14:16:16.770396  604817 logs.go:282] 0 containers: []
	W0127 14:16:16.770403  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:16.770409  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:16.770458  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:16.801290  604817 cri.go:89] found id: ""
	I0127 14:16:16.801315  604817 logs.go:282] 0 containers: []
	W0127 14:16:16.801322  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:16.801327  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:16.801374  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:16.841344  604817 cri.go:89] found id: ""
	I0127 14:16:16.841371  604817 logs.go:282] 0 containers: []
	W0127 14:16:16.841381  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:16.841389  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:16.841447  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:16.874308  604817 cri.go:89] found id: ""
	I0127 14:16:16.874334  604817 logs.go:282] 0 containers: []
	W0127 14:16:16.874343  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:16.874357  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:16.874373  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:16.911334  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:16.911408  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:16.962895  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:16.962924  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:16.976599  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:16.976624  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:17.056095  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:17.056121  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:17.056144  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:19.637721  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:19.654929  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:19.655019  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:19.694651  604817 cri.go:89] found id: ""
	I0127 14:16:19.694687  604817 logs.go:282] 0 containers: []
	W0127 14:16:19.694700  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:19.694709  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:19.694781  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:19.734304  604817 cri.go:89] found id: ""
	I0127 14:16:19.734346  604817 logs.go:282] 0 containers: []
	W0127 14:16:19.734355  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:19.734367  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:19.734437  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:19.768405  604817 cri.go:89] found id: ""
	I0127 14:16:19.768438  604817 logs.go:282] 0 containers: []
	W0127 14:16:19.768449  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:19.768458  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:19.768527  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:19.807010  604817 cri.go:89] found id: ""
	I0127 14:16:19.807041  604817 logs.go:282] 0 containers: []
	W0127 14:16:19.807052  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:19.807066  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:19.807138  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:19.849881  604817 cri.go:89] found id: ""
	I0127 14:16:19.849909  604817 logs.go:282] 0 containers: []
	W0127 14:16:19.849917  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:19.849924  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:19.849986  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:19.887904  604817 cri.go:89] found id: ""
	I0127 14:16:19.887942  604817 logs.go:282] 0 containers: []
	W0127 14:16:19.887954  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:19.887973  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:19.888038  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:19.941201  604817 cri.go:89] found id: ""
	I0127 14:16:19.941246  604817 logs.go:282] 0 containers: []
	W0127 14:16:19.941257  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:19.941266  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:19.941333  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:19.974204  604817 cri.go:89] found id: ""
	I0127 14:16:19.974237  604817 logs.go:282] 0 containers: []
	W0127 14:16:19.974249  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:19.974262  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:19.974281  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:20.026958  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:20.026995  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:20.040207  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:20.040235  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:20.120118  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:20.120149  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:20.120166  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:20.217053  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:20.217110  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:22.762742  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:22.777735  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:22.777806  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:22.815584  604817 cri.go:89] found id: ""
	I0127 14:16:22.815619  604817 logs.go:282] 0 containers: []
	W0127 14:16:22.815631  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:22.815639  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:22.815706  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:22.857197  604817 cri.go:89] found id: ""
	I0127 14:16:22.857235  604817 logs.go:282] 0 containers: []
	W0127 14:16:22.857243  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:22.857249  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:22.857318  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:22.895035  604817 cri.go:89] found id: ""
	I0127 14:16:22.895061  604817 logs.go:282] 0 containers: []
	W0127 14:16:22.895070  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:22.895076  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:22.895133  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:22.927522  604817 cri.go:89] found id: ""
	I0127 14:16:22.927560  604817 logs.go:282] 0 containers: []
	W0127 14:16:22.927572  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:22.927584  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:22.927658  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:22.962043  604817 cri.go:89] found id: ""
	I0127 14:16:22.962067  604817 logs.go:282] 0 containers: []
	W0127 14:16:22.962077  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:22.962085  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:22.962145  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:23.000463  604817 cri.go:89] found id: ""
	I0127 14:16:23.000489  604817 logs.go:282] 0 containers: []
	W0127 14:16:23.000500  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:23.000507  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:23.000558  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:23.039585  604817 cri.go:89] found id: ""
	I0127 14:16:23.039614  604817 logs.go:282] 0 containers: []
	W0127 14:16:23.039624  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:23.039632  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:23.039693  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:23.084705  604817 cri.go:89] found id: ""
	I0127 14:16:23.084737  604817 logs.go:282] 0 containers: []
	W0127 14:16:23.084748  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:23.084762  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:23.084774  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:23.130139  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:23.130175  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:23.192889  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:23.192924  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:23.207972  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:23.208000  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:23.283351  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:23.283386  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:23.283402  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:25.888342  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:25.903402  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:25.903477  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:25.941009  604817 cri.go:89] found id: ""
	I0127 14:16:25.941031  604817 logs.go:282] 0 containers: []
	W0127 14:16:25.941038  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:25.941044  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:25.941101  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:25.976224  604817 cri.go:89] found id: ""
	I0127 14:16:25.976250  604817 logs.go:282] 0 containers: []
	W0127 14:16:25.976260  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:25.976268  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:25.976321  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:26.013852  604817 cri.go:89] found id: ""
	I0127 14:16:26.013876  604817 logs.go:282] 0 containers: []
	W0127 14:16:26.013886  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:26.013894  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:26.013949  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:26.052519  604817 cri.go:89] found id: ""
	I0127 14:16:26.052544  604817 logs.go:282] 0 containers: []
	W0127 14:16:26.052552  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:26.052558  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:26.052616  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:26.085239  604817 cri.go:89] found id: ""
	I0127 14:16:26.085275  604817 logs.go:282] 0 containers: []
	W0127 14:16:26.085296  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:26.085304  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:26.085360  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:26.119061  604817 cri.go:89] found id: ""
	I0127 14:16:26.119086  604817 logs.go:282] 0 containers: []
	W0127 14:16:26.119096  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:26.119104  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:26.119169  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:26.153498  604817 cri.go:89] found id: ""
	I0127 14:16:26.153526  604817 logs.go:282] 0 containers: []
	W0127 14:16:26.153535  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:26.153543  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:26.153621  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:26.187871  604817 cri.go:89] found id: ""
	I0127 14:16:26.187900  604817 logs.go:282] 0 containers: []
	W0127 14:16:26.187910  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:26.187923  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:26.187939  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:26.263146  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:26.263182  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:26.263199  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:26.359834  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:26.359875  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:26.439856  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:26.439896  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:26.503986  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:26.504023  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:29.018863  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:29.033411  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:29.033494  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:29.077038  604817 cri.go:89] found id: ""
	I0127 14:16:29.077079  604817 logs.go:282] 0 containers: []
	W0127 14:16:29.077090  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:29.077099  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:29.077167  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:29.118022  604817 cri.go:89] found id: ""
	I0127 14:16:29.118054  604817 logs.go:282] 0 containers: []
	W0127 14:16:29.118066  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:29.118074  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:29.118136  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:29.161960  604817 cri.go:89] found id: ""
	I0127 14:16:29.161991  604817 logs.go:282] 0 containers: []
	W0127 14:16:29.162002  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:29.162010  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:29.162075  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:29.212682  604817 cri.go:89] found id: ""
	I0127 14:16:29.212719  604817 logs.go:282] 0 containers: []
	W0127 14:16:29.212731  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:29.212739  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:29.212811  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:29.249570  604817 cri.go:89] found id: ""
	I0127 14:16:29.249635  604817 logs.go:282] 0 containers: []
	W0127 14:16:29.249646  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:29.249652  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:29.249715  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:29.294592  604817 cri.go:89] found id: ""
	I0127 14:16:29.294621  604817 logs.go:282] 0 containers: []
	W0127 14:16:29.294631  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:29.294640  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:29.294699  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:29.332797  604817 cri.go:89] found id: ""
	I0127 14:16:29.332825  604817 logs.go:282] 0 containers: []
	W0127 14:16:29.332834  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:29.332840  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:29.332906  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:29.381559  604817 cri.go:89] found id: ""
	I0127 14:16:29.381606  604817 logs.go:282] 0 containers: []
	W0127 14:16:29.381619  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:29.381635  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:29.381652  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:29.448163  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:29.448195  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:29.464807  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:29.464843  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:29.549459  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:29.549494  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:29.549512  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:29.627483  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:29.627527  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:32.174807  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:32.191619  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:32.191725  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:32.237557  604817 cri.go:89] found id: ""
	I0127 14:16:32.237608  604817 logs.go:282] 0 containers: []
	W0127 14:16:32.237621  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:32.237630  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:32.237702  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:32.289281  604817 cri.go:89] found id: ""
	I0127 14:16:32.289315  604817 logs.go:282] 0 containers: []
	W0127 14:16:32.289326  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:32.289334  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:32.289413  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:32.332613  604817 cri.go:89] found id: ""
	I0127 14:16:32.332647  604817 logs.go:282] 0 containers: []
	W0127 14:16:32.332672  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:32.332692  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:32.332761  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:32.380505  604817 cri.go:89] found id: ""
	I0127 14:16:32.380536  604817 logs.go:282] 0 containers: []
	W0127 14:16:32.380546  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:32.380555  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:32.380623  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:32.429758  604817 cri.go:89] found id: ""
	I0127 14:16:32.429791  604817 logs.go:282] 0 containers: []
	W0127 14:16:32.429803  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:32.429811  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:32.429873  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:32.471887  604817 cri.go:89] found id: ""
	I0127 14:16:32.471919  604817 logs.go:282] 0 containers: []
	W0127 14:16:32.471930  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:32.471939  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:32.472003  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:32.519113  604817 cri.go:89] found id: ""
	I0127 14:16:32.519146  604817 logs.go:282] 0 containers: []
	W0127 14:16:32.519158  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:32.519167  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:32.519231  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:32.559305  604817 cri.go:89] found id: ""
	I0127 14:16:32.559336  604817 logs.go:282] 0 containers: []
	W0127 14:16:32.559347  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:32.559360  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:32.559380  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:32.617260  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:32.617301  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:32.633545  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:32.633574  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:32.715066  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:32.715102  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:32.715128  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:32.820317  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:32.820353  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:35.366203  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:35.382860  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:35.382940  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:35.418368  604817 cri.go:89] found id: ""
	I0127 14:16:35.418402  604817 logs.go:282] 0 containers: []
	W0127 14:16:35.418414  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:35.418423  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:35.418491  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:35.461051  604817 cri.go:89] found id: ""
	I0127 14:16:35.461087  604817 logs.go:282] 0 containers: []
	W0127 14:16:35.461098  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:35.461105  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:35.461181  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:35.510248  604817 cri.go:89] found id: ""
	I0127 14:16:35.510280  604817 logs.go:282] 0 containers: []
	W0127 14:16:35.510291  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:35.510299  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:35.510368  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:35.545870  604817 cri.go:89] found id: ""
	I0127 14:16:35.545898  604817 logs.go:282] 0 containers: []
	W0127 14:16:35.545905  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:35.545912  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:35.545983  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:35.595342  604817 cri.go:89] found id: ""
	I0127 14:16:35.595378  604817 logs.go:282] 0 containers: []
	W0127 14:16:35.595389  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:35.595397  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:35.595473  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:35.644547  604817 cri.go:89] found id: ""
	I0127 14:16:35.644572  604817 logs.go:282] 0 containers: []
	W0127 14:16:35.644582  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:35.644590  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:35.644655  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:35.686800  604817 cri.go:89] found id: ""
	I0127 14:16:35.686831  604817 logs.go:282] 0 containers: []
	W0127 14:16:35.686839  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:35.686845  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:35.686913  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:35.723045  604817 cri.go:89] found id: ""
	I0127 14:16:35.723076  604817 logs.go:282] 0 containers: []
	W0127 14:16:35.723088  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:35.723102  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:35.723116  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:35.736217  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:35.736247  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:35.809876  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:35.809901  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:35.809918  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:35.905509  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:35.905549  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:35.949005  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:35.949041  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:38.506893  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:38.521111  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:38.521206  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:38.565630  604817 cri.go:89] found id: ""
	I0127 14:16:38.565666  604817 logs.go:282] 0 containers: []
	W0127 14:16:38.565677  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:38.565686  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:38.565753  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:38.602720  604817 cri.go:89] found id: ""
	I0127 14:16:38.602751  604817 logs.go:282] 0 containers: []
	W0127 14:16:38.602760  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:38.602769  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:38.602834  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:38.639968  604817 cri.go:89] found id: ""
	I0127 14:16:38.640001  604817 logs.go:282] 0 containers: []
	W0127 14:16:38.640014  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:38.640025  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:38.640091  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:38.684498  604817 cri.go:89] found id: ""
	I0127 14:16:38.684533  604817 logs.go:282] 0 containers: []
	W0127 14:16:38.684546  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:38.684556  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:38.684624  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:38.724219  604817 cri.go:89] found id: ""
	I0127 14:16:38.724250  604817 logs.go:282] 0 containers: []
	W0127 14:16:38.724263  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:38.724272  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:38.724340  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:38.763277  604817 cri.go:89] found id: ""
	I0127 14:16:38.763311  604817 logs.go:282] 0 containers: []
	W0127 14:16:38.763322  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:38.763331  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:38.763409  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:38.805417  604817 cri.go:89] found id: ""
	I0127 14:16:38.805447  604817 logs.go:282] 0 containers: []
	W0127 14:16:38.805457  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:38.805465  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:38.805534  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:38.843012  604817 cri.go:89] found id: ""
	I0127 14:16:38.843041  604817 logs.go:282] 0 containers: []
	W0127 14:16:38.843051  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:38.843063  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:38.843079  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:38.913160  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:38.913198  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:38.928890  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:38.928923  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:39.003961  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:39.003992  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:39.004009  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:39.091966  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:39.091998  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:41.632912  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:41.648864  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:41.648937  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:41.687549  604817 cri.go:89] found id: ""
	I0127 14:16:41.687579  604817 logs.go:282] 0 containers: []
	W0127 14:16:41.687590  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:41.687597  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:41.687646  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:41.721043  604817 cri.go:89] found id: ""
	I0127 14:16:41.721071  604817 logs.go:282] 0 containers: []
	W0127 14:16:41.721079  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:41.721084  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:41.721134  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:41.760055  604817 cri.go:89] found id: ""
	I0127 14:16:41.760102  604817 logs.go:282] 0 containers: []
	W0127 14:16:41.760113  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:41.760121  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:41.760184  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:41.792559  604817 cri.go:89] found id: ""
	I0127 14:16:41.792585  604817 logs.go:282] 0 containers: []
	W0127 14:16:41.792596  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:41.792604  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:41.792667  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:41.829234  604817 cri.go:89] found id: ""
	I0127 14:16:41.829265  604817 logs.go:282] 0 containers: []
	W0127 14:16:41.829273  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:41.829279  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:41.829336  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:41.864589  604817 cri.go:89] found id: ""
	I0127 14:16:41.864620  604817 logs.go:282] 0 containers: []
	W0127 14:16:41.864631  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:41.864639  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:41.864705  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:41.899760  604817 cri.go:89] found id: ""
	I0127 14:16:41.899786  604817 logs.go:282] 0 containers: []
	W0127 14:16:41.899794  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:41.899799  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:41.899861  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:41.931834  604817 cri.go:89] found id: ""
	I0127 14:16:41.931860  604817 logs.go:282] 0 containers: []
	W0127 14:16:41.931871  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:41.931885  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:41.931904  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:42.011304  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:42.011333  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:42.053670  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:42.053703  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:42.107851  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:42.107880  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:42.121337  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:42.121366  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:42.189076  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:44.690583  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:44.705594  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:44.705658  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:44.739756  604817 cri.go:89] found id: ""
	I0127 14:16:44.739791  604817 logs.go:282] 0 containers: []
	W0127 14:16:44.739803  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:44.739813  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:44.739877  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:44.774800  604817 cri.go:89] found id: ""
	I0127 14:16:44.774828  604817 logs.go:282] 0 containers: []
	W0127 14:16:44.774839  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:44.774846  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:44.774906  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:44.807649  604817 cri.go:89] found id: ""
	I0127 14:16:44.807682  604817 logs.go:282] 0 containers: []
	W0127 14:16:44.807695  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:44.807707  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:44.807772  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:44.845431  604817 cri.go:89] found id: ""
	I0127 14:16:44.845457  604817 logs.go:282] 0 containers: []
	W0127 14:16:44.845468  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:44.845477  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:44.845540  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:44.881563  604817 cri.go:89] found id: ""
	I0127 14:16:44.881593  604817 logs.go:282] 0 containers: []
	W0127 14:16:44.881600  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:44.881605  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:44.881652  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:44.914677  604817 cri.go:89] found id: ""
	I0127 14:16:44.914698  604817 logs.go:282] 0 containers: []
	W0127 14:16:44.914705  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:44.914710  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:44.914752  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:44.950784  604817 cri.go:89] found id: ""
	I0127 14:16:44.950808  604817 logs.go:282] 0 containers: []
	W0127 14:16:44.950817  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:44.950824  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:44.950877  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:44.984213  604817 cri.go:89] found id: ""
	I0127 14:16:44.984235  604817 logs.go:282] 0 containers: []
	W0127 14:16:44.984245  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:44.984257  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:44.984271  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:45.038044  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:45.038119  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:45.050762  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:45.050784  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:45.127156  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:45.127183  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:45.127196  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:45.210601  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:45.210636  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:47.749667  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:47.763715  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:47.763789  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:47.801244  604817 cri.go:89] found id: ""
	I0127 14:16:47.801271  604817 logs.go:282] 0 containers: []
	W0127 14:16:47.801317  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:47.801330  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:47.801402  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:47.833788  604817 cri.go:89] found id: ""
	I0127 14:16:47.833810  604817 logs.go:282] 0 containers: []
	W0127 14:16:47.833817  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:47.833825  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:47.833890  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:47.868588  604817 cri.go:89] found id: ""
	I0127 14:16:47.868610  604817 logs.go:282] 0 containers: []
	W0127 14:16:47.868620  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:47.868628  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:47.868683  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:47.899202  604817 cri.go:89] found id: ""
	I0127 14:16:47.899229  604817 logs.go:282] 0 containers: []
	W0127 14:16:47.899238  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:47.899246  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:47.899297  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:47.932518  604817 cri.go:89] found id: ""
	I0127 14:16:47.932545  604817 logs.go:282] 0 containers: []
	W0127 14:16:47.932555  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:47.932564  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:47.932627  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:47.964541  604817 cri.go:89] found id: ""
	I0127 14:16:47.964565  604817 logs.go:282] 0 containers: []
	W0127 14:16:47.964574  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:47.964581  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:47.964636  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:47.995176  604817 cri.go:89] found id: ""
	I0127 14:16:47.995203  604817 logs.go:282] 0 containers: []
	W0127 14:16:47.995213  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:47.995221  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:47.995267  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:48.025831  604817 cri.go:89] found id: ""
	I0127 14:16:48.025857  604817 logs.go:282] 0 containers: []
	W0127 14:16:48.025867  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:48.025880  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:48.025892  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:48.077371  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:48.077396  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:48.090095  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:48.090116  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:48.160212  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:48.160239  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:48.160254  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:48.241101  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:48.241137  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:50.780015  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:50.793266  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:50.793314  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:50.826712  604817 cri.go:89] found id: ""
	I0127 14:16:50.826732  604817 logs.go:282] 0 containers: []
	W0127 14:16:50.826738  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:50.826744  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:50.826782  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:50.866917  604817 cri.go:89] found id: ""
	I0127 14:16:50.866948  604817 logs.go:282] 0 containers: []
	W0127 14:16:50.866958  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:50.866966  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:50.867019  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:50.910029  604817 cri.go:89] found id: ""
	I0127 14:16:50.910060  604817 logs.go:282] 0 containers: []
	W0127 14:16:50.910070  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:50.910078  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:50.910142  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:50.947648  604817 cri.go:89] found id: ""
	I0127 14:16:50.947681  604817 logs.go:282] 0 containers: []
	W0127 14:16:50.947691  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:50.947699  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:50.947760  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:50.988436  604817 cri.go:89] found id: ""
	I0127 14:16:50.988466  604817 logs.go:282] 0 containers: []
	W0127 14:16:50.988477  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:50.988485  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:50.988548  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:51.028842  604817 cri.go:89] found id: ""
	I0127 14:16:51.028876  604817 logs.go:282] 0 containers: []
	W0127 14:16:51.028887  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:51.028894  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:51.028958  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:51.066367  604817 cri.go:89] found id: ""
	I0127 14:16:51.066398  604817 logs.go:282] 0 containers: []
	W0127 14:16:51.066408  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:51.066416  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:51.066470  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:51.104809  604817 cri.go:89] found id: ""
	I0127 14:16:51.104838  604817 logs.go:282] 0 containers: []
	W0127 14:16:51.104849  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:51.104863  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:51.104877  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:51.119378  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:51.119420  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:51.191116  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:51.191139  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:51.191155  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:51.285485  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:51.285525  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:51.335621  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:51.335661  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:53.901697  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:53.916741  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:53.916801  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:53.956696  604817 cri.go:89] found id: ""
	I0127 14:16:53.956724  604817 logs.go:282] 0 containers: []
	W0127 14:16:53.956735  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:53.956748  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:53.956800  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:53.996658  604817 cri.go:89] found id: ""
	I0127 14:16:53.996688  604817 logs.go:282] 0 containers: []
	W0127 14:16:53.996700  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:53.996707  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:53.996770  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:54.032978  604817 cri.go:89] found id: ""
	I0127 14:16:54.033007  604817 logs.go:282] 0 containers: []
	W0127 14:16:54.033018  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:54.033026  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:54.033095  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:54.067823  604817 cri.go:89] found id: ""
	I0127 14:16:54.067851  604817 logs.go:282] 0 containers: []
	W0127 14:16:54.067861  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:54.067870  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:54.067931  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:54.102784  604817 cri.go:89] found id: ""
	I0127 14:16:54.102811  604817 logs.go:282] 0 containers: []
	W0127 14:16:54.102822  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:54.102830  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:54.102894  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:54.135013  604817 cri.go:89] found id: ""
	I0127 14:16:54.135040  604817 logs.go:282] 0 containers: []
	W0127 14:16:54.135053  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:54.135061  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:54.135113  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:54.167942  604817 cri.go:89] found id: ""
	I0127 14:16:54.167978  604817 logs.go:282] 0 containers: []
	W0127 14:16:54.167988  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:54.167997  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:54.168052  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:54.198704  604817 cri.go:89] found id: ""
	I0127 14:16:54.198727  604817 logs.go:282] 0 containers: []
	W0127 14:16:54.198735  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:54.198745  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:54.198756  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:54.232510  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:54.232537  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:54.282649  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:54.282685  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:16:54.296668  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:54.296699  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:54.370579  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:54.370603  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:54.370622  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:56.953050  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:16:56.968466  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:16:56.968546  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:16:57.003569  604817 cri.go:89] found id: ""
	I0127 14:16:57.003599  604817 logs.go:282] 0 containers: []
	W0127 14:16:57.003610  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:16:57.003618  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:16:57.003685  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:16:57.036039  604817 cri.go:89] found id: ""
	I0127 14:16:57.036068  604817 logs.go:282] 0 containers: []
	W0127 14:16:57.036080  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:16:57.036087  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:16:57.036154  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:16:57.070824  604817 cri.go:89] found id: ""
	I0127 14:16:57.070851  604817 logs.go:282] 0 containers: []
	W0127 14:16:57.070860  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:16:57.070866  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:16:57.070921  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:16:57.108186  604817 cri.go:89] found id: ""
	I0127 14:16:57.108213  604817 logs.go:282] 0 containers: []
	W0127 14:16:57.108223  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:16:57.108230  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:16:57.108294  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:16:57.142448  604817 cri.go:89] found id: ""
	I0127 14:16:57.142478  604817 logs.go:282] 0 containers: []
	W0127 14:16:57.142489  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:16:57.142496  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:16:57.142557  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:16:57.177686  604817 cri.go:89] found id: ""
	I0127 14:16:57.177715  604817 logs.go:282] 0 containers: []
	W0127 14:16:57.177725  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:16:57.177734  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:16:57.177799  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:16:57.209717  604817 cri.go:89] found id: ""
	I0127 14:16:57.209744  604817 logs.go:282] 0 containers: []
	W0127 14:16:57.209751  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:16:57.209758  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:16:57.209818  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:16:57.248078  604817 cri.go:89] found id: ""
	I0127 14:16:57.248110  604817 logs.go:282] 0 containers: []
	W0127 14:16:57.248125  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:16:57.248138  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:16:57.248152  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:16:57.323880  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:16:57.323908  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:16:57.323927  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:16:57.424500  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:16:57.424551  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:16:57.477256  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:16:57.477286  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:16:57.528881  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:16:57.528909  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:00.044124  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:00.062042  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:00.062120  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:00.098217  604817 cri.go:89] found id: ""
	I0127 14:17:00.098245  604817 logs.go:282] 0 containers: []
	W0127 14:17:00.098256  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:00.098264  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:00.098329  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:00.129293  604817 cri.go:89] found id: ""
	I0127 14:17:00.129320  604817 logs.go:282] 0 containers: []
	W0127 14:17:00.129331  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:00.129340  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:00.129392  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:00.161480  604817 cri.go:89] found id: ""
	I0127 14:17:00.161512  604817 logs.go:282] 0 containers: []
	W0127 14:17:00.161520  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:00.161526  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:00.161602  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:00.194231  604817 cri.go:89] found id: ""
	I0127 14:17:00.194261  604817 logs.go:282] 0 containers: []
	W0127 14:17:00.194269  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:00.194276  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:00.194337  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:00.229187  604817 cri.go:89] found id: ""
	I0127 14:17:00.229216  604817 logs.go:282] 0 containers: []
	W0127 14:17:00.229225  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:00.229232  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:00.229292  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:00.262432  604817 cri.go:89] found id: ""
	I0127 14:17:00.262468  604817 logs.go:282] 0 containers: []
	W0127 14:17:00.262480  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:00.262490  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:00.262553  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:00.301294  604817 cri.go:89] found id: ""
	I0127 14:17:00.301321  604817 logs.go:282] 0 containers: []
	W0127 14:17:00.301330  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:00.301337  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:00.301401  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:00.339864  604817 cri.go:89] found id: ""
	I0127 14:17:00.339897  604817 logs.go:282] 0 containers: []
	W0127 14:17:00.339908  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:00.339922  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:00.339937  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:00.353837  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:00.353875  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:00.434020  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:00.434053  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:00.434071  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:00.519821  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:00.519860  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:00.564458  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:00.564500  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:03.115079  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:03.128620  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:03.128696  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:03.168120  604817 cri.go:89] found id: ""
	I0127 14:17:03.168150  604817 logs.go:282] 0 containers: []
	W0127 14:17:03.168161  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:03.168169  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:03.168232  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:03.205059  604817 cri.go:89] found id: ""
	I0127 14:17:03.205096  604817 logs.go:282] 0 containers: []
	W0127 14:17:03.205107  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:03.205115  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:03.205182  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:03.247406  604817 cri.go:89] found id: ""
	I0127 14:17:03.247435  604817 logs.go:282] 0 containers: []
	W0127 14:17:03.247446  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:03.247455  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:03.247519  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:03.291263  604817 cri.go:89] found id: ""
	I0127 14:17:03.291304  604817 logs.go:282] 0 containers: []
	W0127 14:17:03.291315  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:03.291323  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:03.291403  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:03.333862  604817 cri.go:89] found id: ""
	I0127 14:17:03.333896  604817 logs.go:282] 0 containers: []
	W0127 14:17:03.333907  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:03.333915  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:03.333984  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:03.368434  604817 cri.go:89] found id: ""
	I0127 14:17:03.368468  604817 logs.go:282] 0 containers: []
	W0127 14:17:03.368480  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:03.368489  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:03.368552  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:03.410743  604817 cri.go:89] found id: ""
	I0127 14:17:03.410766  604817 logs.go:282] 0 containers: []
	W0127 14:17:03.410773  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:03.410780  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:03.410845  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:03.454843  604817 cri.go:89] found id: ""
	I0127 14:17:03.454880  604817 logs.go:282] 0 containers: []
	W0127 14:17:03.454890  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:03.454904  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:03.454921  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:03.533978  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:03.534012  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:03.579902  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:03.580009  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:03.630872  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:03.630903  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:03.648594  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:03.648630  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:03.720991  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:06.221342  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:06.237046  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:06.237123  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:06.271409  604817 cri.go:89] found id: ""
	I0127 14:17:06.271435  604817 logs.go:282] 0 containers: []
	W0127 14:17:06.271445  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:06.271452  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:06.271515  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:06.308517  604817 cri.go:89] found id: ""
	I0127 14:17:06.308546  604817 logs.go:282] 0 containers: []
	W0127 14:17:06.308556  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:06.308563  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:06.308627  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:06.341545  604817 cri.go:89] found id: ""
	I0127 14:17:06.341573  604817 logs.go:282] 0 containers: []
	W0127 14:17:06.341600  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:06.341609  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:06.341671  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:06.376860  604817 cri.go:89] found id: ""
	I0127 14:17:06.376884  604817 logs.go:282] 0 containers: []
	W0127 14:17:06.376894  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:06.376936  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:06.376998  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:06.408786  604817 cri.go:89] found id: ""
	I0127 14:17:06.408805  604817 logs.go:282] 0 containers: []
	W0127 14:17:06.408811  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:06.408816  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:06.408864  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:06.446815  604817 cri.go:89] found id: ""
	I0127 14:17:06.446841  604817 logs.go:282] 0 containers: []
	W0127 14:17:06.446850  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:06.446857  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:06.446912  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:06.484252  604817 cri.go:89] found id: ""
	I0127 14:17:06.484284  604817 logs.go:282] 0 containers: []
	W0127 14:17:06.484297  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:06.484305  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:06.484378  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:06.523069  604817 cri.go:89] found id: ""
	I0127 14:17:06.523097  604817 logs.go:282] 0 containers: []
	W0127 14:17:06.523107  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:06.523126  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:06.523142  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:06.591403  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:06.591424  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:06.591443  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:06.672462  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:06.672492  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:06.707770  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:06.707798  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:06.761233  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:06.761262  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:09.276259  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:09.289110  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:09.289181  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:09.326327  604817 cri.go:89] found id: ""
	I0127 14:17:09.326350  604817 logs.go:282] 0 containers: []
	W0127 14:17:09.326357  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:09.326363  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:09.326421  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:09.360823  604817 cri.go:89] found id: ""
	I0127 14:17:09.360845  604817 logs.go:282] 0 containers: []
	W0127 14:17:09.360852  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:09.360857  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:09.360909  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:09.393828  604817 cri.go:89] found id: ""
	I0127 14:17:09.393851  604817 logs.go:282] 0 containers: []
	W0127 14:17:09.393858  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:09.393863  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:09.393904  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:09.428087  604817 cri.go:89] found id: ""
	I0127 14:17:09.428113  604817 logs.go:282] 0 containers: []
	W0127 14:17:09.428120  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:09.428126  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:09.428181  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:09.461214  604817 cri.go:89] found id: ""
	I0127 14:17:09.461235  604817 logs.go:282] 0 containers: []
	W0127 14:17:09.461244  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:09.461251  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:09.461310  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:09.494078  604817 cri.go:89] found id: ""
	I0127 14:17:09.494105  604817 logs.go:282] 0 containers: []
	W0127 14:17:09.494115  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:09.494123  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:09.494191  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:09.525185  604817 cri.go:89] found id: ""
	I0127 14:17:09.525205  604817 logs.go:282] 0 containers: []
	W0127 14:17:09.525211  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:09.525217  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:09.525257  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:09.556708  604817 cri.go:89] found id: ""
	I0127 14:17:09.556732  604817 logs.go:282] 0 containers: []
	W0127 14:17:09.556742  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:09.556754  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:09.556767  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:09.632519  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:09.632552  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:09.632572  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:09.711728  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:09.711757  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:09.752684  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:09.752712  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:09.804765  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:09.804794  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:12.319654  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:12.332849  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:12.332918  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:12.369477  604817 cri.go:89] found id: ""
	I0127 14:17:12.369503  604817 logs.go:282] 0 containers: []
	W0127 14:17:12.369515  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:12.369523  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:12.369575  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:12.403338  604817 cri.go:89] found id: ""
	I0127 14:17:12.403373  604817 logs.go:282] 0 containers: []
	W0127 14:17:12.403383  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:12.403391  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:12.403454  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:12.436127  604817 cri.go:89] found id: ""
	I0127 14:17:12.436150  604817 logs.go:282] 0 containers: []
	W0127 14:17:12.436161  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:12.436169  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:12.436221  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:12.467693  604817 cri.go:89] found id: ""
	I0127 14:17:12.467715  604817 logs.go:282] 0 containers: []
	W0127 14:17:12.467722  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:12.467728  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:12.467785  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:12.499762  604817 cri.go:89] found id: ""
	I0127 14:17:12.499784  604817 logs.go:282] 0 containers: []
	W0127 14:17:12.499791  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:12.499796  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:12.499837  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:12.529805  604817 cri.go:89] found id: ""
	I0127 14:17:12.529839  604817 logs.go:282] 0 containers: []
	W0127 14:17:12.529849  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:12.529859  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:12.529918  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:12.573759  604817 cri.go:89] found id: ""
	I0127 14:17:12.573789  604817 logs.go:282] 0 containers: []
	W0127 14:17:12.573797  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:12.573803  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:12.573861  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:12.632933  604817 cri.go:89] found id: ""
	I0127 14:17:12.632968  604817 logs.go:282] 0 containers: []
	W0127 14:17:12.632980  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:12.632992  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:12.633003  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:12.690666  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:12.690693  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:12.703679  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:12.703712  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:12.769741  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:12.769758  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:12.769770  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:12.850355  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:12.850380  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:15.387770  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:15.400289  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:15.400346  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:15.439121  604817 cri.go:89] found id: ""
	I0127 14:17:15.439148  604817 logs.go:282] 0 containers: []
	W0127 14:17:15.439163  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:15.439173  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:15.439237  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:15.475447  604817 cri.go:89] found id: ""
	I0127 14:17:15.475470  604817 logs.go:282] 0 containers: []
	W0127 14:17:15.475477  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:15.475483  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:15.475525  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:15.508372  604817 cri.go:89] found id: ""
	I0127 14:17:15.508391  604817 logs.go:282] 0 containers: []
	W0127 14:17:15.508398  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:15.508403  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:15.508449  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:15.540806  604817 cri.go:89] found id: ""
	I0127 14:17:15.540837  604817 logs.go:282] 0 containers: []
	W0127 14:17:15.540849  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:15.540859  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:15.540924  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:15.574456  604817 cri.go:89] found id: ""
	I0127 14:17:15.574485  604817 logs.go:282] 0 containers: []
	W0127 14:17:15.574497  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:15.574505  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:15.574563  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:15.607407  604817 cri.go:89] found id: ""
	I0127 14:17:15.607434  604817 logs.go:282] 0 containers: []
	W0127 14:17:15.607445  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:15.607453  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:15.607498  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:15.640319  604817 cri.go:89] found id: ""
	I0127 14:17:15.640346  604817 logs.go:282] 0 containers: []
	W0127 14:17:15.640354  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:15.640360  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:15.640416  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:15.672265  604817 cri.go:89] found id: ""
	I0127 14:17:15.672288  604817 logs.go:282] 0 containers: []
	W0127 14:17:15.672295  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:15.672304  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:15.672315  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:15.725677  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:15.725702  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:15.738352  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:15.738376  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:15.816562  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:15.816587  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:15.816602  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:15.895516  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:15.895544  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:18.437986  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:18.451759  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:18.451815  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:18.485435  604817 cri.go:89] found id: ""
	I0127 14:17:18.485458  604817 logs.go:282] 0 containers: []
	W0127 14:17:18.485465  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:18.485470  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:18.485514  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:18.515058  604817 cri.go:89] found id: ""
	I0127 14:17:18.515084  604817 logs.go:282] 0 containers: []
	W0127 14:17:18.515104  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:18.515110  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:18.515156  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:18.547395  604817 cri.go:89] found id: ""
	I0127 14:17:18.547415  604817 logs.go:282] 0 containers: []
	W0127 14:17:18.547425  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:18.547431  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:18.547474  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:18.579119  604817 cri.go:89] found id: ""
	I0127 14:17:18.579145  604817 logs.go:282] 0 containers: []
	W0127 14:17:18.579155  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:18.579162  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:18.579204  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:18.608510  604817 cri.go:89] found id: ""
	I0127 14:17:18.608531  604817 logs.go:282] 0 containers: []
	W0127 14:17:18.608537  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:18.608543  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:18.608582  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:18.648289  604817 cri.go:89] found id: ""
	I0127 14:17:18.648315  604817 logs.go:282] 0 containers: []
	W0127 14:17:18.648323  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:18.648328  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:18.648381  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:18.684384  604817 cri.go:89] found id: ""
	I0127 14:17:18.684408  604817 logs.go:282] 0 containers: []
	W0127 14:17:18.684419  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:18.684427  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:18.684487  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:18.717413  604817 cri.go:89] found id: ""
	I0127 14:17:18.717439  604817 logs.go:282] 0 containers: []
	W0127 14:17:18.717449  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:18.717463  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:18.717479  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:18.768094  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:18.768124  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:18.780831  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:18.780852  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:18.843389  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:18.843417  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:18.843434  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:18.918905  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:18.918928  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:21.456909  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:21.471386  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:21.471440  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:21.509526  604817 cri.go:89] found id: ""
	I0127 14:17:21.509554  604817 logs.go:282] 0 containers: []
	W0127 14:17:21.509566  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:21.509591  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:21.509655  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:21.547273  604817 cri.go:89] found id: ""
	I0127 14:17:21.547311  604817 logs.go:282] 0 containers: []
	W0127 14:17:21.547324  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:21.547333  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:21.547404  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:21.591668  604817 cri.go:89] found id: ""
	I0127 14:17:21.591692  604817 logs.go:282] 0 containers: []
	W0127 14:17:21.591699  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:21.591706  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:21.591757  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:21.624512  604817 cri.go:89] found id: ""
	I0127 14:17:21.624537  604817 logs.go:282] 0 containers: []
	W0127 14:17:21.624545  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:21.624551  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:21.624600  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:21.656827  604817 cri.go:89] found id: ""
	I0127 14:17:21.656856  604817 logs.go:282] 0 containers: []
	W0127 14:17:21.656866  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:21.656873  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:21.656939  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:21.690712  604817 cri.go:89] found id: ""
	I0127 14:17:21.690734  604817 logs.go:282] 0 containers: []
	W0127 14:17:21.690741  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:21.690746  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:21.690791  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:21.730666  604817 cri.go:89] found id: ""
	I0127 14:17:21.730688  604817 logs.go:282] 0 containers: []
	W0127 14:17:21.730695  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:21.730701  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:21.730749  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:21.761329  604817 cri.go:89] found id: ""
	I0127 14:17:21.761357  604817 logs.go:282] 0 containers: []
	W0127 14:17:21.761368  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:21.761383  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:21.761402  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:21.810798  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:21.810824  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:21.824009  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:21.824034  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:21.895572  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:21.895594  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:21.895606  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:21.971451  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:21.971477  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:24.511974  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:24.536291  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:24.536380  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:24.580580  604817 cri.go:89] found id: ""
	I0127 14:17:24.580615  604817 logs.go:282] 0 containers: []
	W0127 14:17:24.580627  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:24.580636  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:24.580711  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:24.626086  604817 cri.go:89] found id: ""
	I0127 14:17:24.626127  604817 logs.go:282] 0 containers: []
	W0127 14:17:24.626138  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:24.626148  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:24.626222  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:24.677422  604817 cri.go:89] found id: ""
	I0127 14:17:24.677456  604817 logs.go:282] 0 containers: []
	W0127 14:17:24.677469  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:24.677479  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:24.677554  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:24.718025  604817 cri.go:89] found id: ""
	I0127 14:17:24.718050  604817 logs.go:282] 0 containers: []
	W0127 14:17:24.718060  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:24.718068  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:24.718145  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:24.756211  604817 cri.go:89] found id: ""
	I0127 14:17:24.756236  604817 logs.go:282] 0 containers: []
	W0127 14:17:24.756246  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:24.756254  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:24.756314  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:24.804893  604817 cri.go:89] found id: ""
	I0127 14:17:24.804915  604817 logs.go:282] 0 containers: []
	W0127 14:17:24.804926  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:24.804935  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:24.804993  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:24.849270  604817 cri.go:89] found id: ""
	I0127 14:17:24.849295  604817 logs.go:282] 0 containers: []
	W0127 14:17:24.849303  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:24.849311  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:24.849363  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:24.887218  604817 cri.go:89] found id: ""
	I0127 14:17:24.887242  604817 logs.go:282] 0 containers: []
	W0127 14:17:24.887249  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:24.887260  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:24.887309  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:24.937563  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:24.937613  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:24.950904  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:24.950930  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:25.024547  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:25.024570  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:25.024586  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:25.102965  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:25.103002  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:27.649694  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:27.668626  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:27.668692  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:27.713927  604817 cri.go:89] found id: ""
	I0127 14:17:27.713959  604817 logs.go:282] 0 containers: []
	W0127 14:17:27.713971  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:27.713980  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:27.714049  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:27.754634  604817 cri.go:89] found id: ""
	I0127 14:17:27.754667  604817 logs.go:282] 0 containers: []
	W0127 14:17:27.754678  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:27.754686  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:27.754767  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:27.794452  604817 cri.go:89] found id: ""
	I0127 14:17:27.794486  604817 logs.go:282] 0 containers: []
	W0127 14:17:27.794498  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:27.794508  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:27.794578  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:27.841061  604817 cri.go:89] found id: ""
	I0127 14:17:27.841093  604817 logs.go:282] 0 containers: []
	W0127 14:17:27.841104  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:27.841112  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:27.841182  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:27.883519  604817 cri.go:89] found id: ""
	I0127 14:17:27.883556  604817 logs.go:282] 0 containers: []
	W0127 14:17:27.883570  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:27.883579  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:27.883649  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:27.922599  604817 cri.go:89] found id: ""
	I0127 14:17:27.922653  604817 logs.go:282] 0 containers: []
	W0127 14:17:27.922667  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:27.922676  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:27.922767  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:27.966550  604817 cri.go:89] found id: ""
	I0127 14:17:27.966583  604817 logs.go:282] 0 containers: []
	W0127 14:17:27.966593  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:27.966602  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:27.966667  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:28.010623  604817 cri.go:89] found id: ""
	I0127 14:17:28.010661  604817 logs.go:282] 0 containers: []
	W0127 14:17:28.010674  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:28.010688  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:28.010707  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:28.068537  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:28.068570  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:28.082323  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:28.082350  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:28.158013  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:28.158044  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:28.158063  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:28.258147  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:28.258187  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:30.804912  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:30.818565  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:30.818651  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:30.863147  604817 cri.go:89] found id: ""
	I0127 14:17:30.863180  604817 logs.go:282] 0 containers: []
	W0127 14:17:30.863191  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:30.863199  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:30.863269  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:30.910280  604817 cri.go:89] found id: ""
	I0127 14:17:30.910315  604817 logs.go:282] 0 containers: []
	W0127 14:17:30.910328  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:30.910338  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:30.910400  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:30.956232  604817 cri.go:89] found id: ""
	I0127 14:17:30.956264  604817 logs.go:282] 0 containers: []
	W0127 14:17:30.956275  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:30.956284  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:30.956365  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:30.996759  604817 cri.go:89] found id: ""
	I0127 14:17:30.996794  604817 logs.go:282] 0 containers: []
	W0127 14:17:30.996805  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:30.996814  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:30.996884  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:31.033626  604817 cri.go:89] found id: ""
	I0127 14:17:31.033658  604817 logs.go:282] 0 containers: []
	W0127 14:17:31.033669  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:31.033677  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:31.033745  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:31.078022  604817 cri.go:89] found id: ""
	I0127 14:17:31.078047  604817 logs.go:282] 0 containers: []
	W0127 14:17:31.078056  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:31.078064  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:31.078132  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:31.111622  604817 cri.go:89] found id: ""
	I0127 14:17:31.111646  604817 logs.go:282] 0 containers: []
	W0127 14:17:31.111652  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:31.111657  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:31.111705  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:31.145311  604817 cri.go:89] found id: ""
	I0127 14:17:31.145340  604817 logs.go:282] 0 containers: []
	W0127 14:17:31.145348  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:31.145366  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:31.145382  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:31.183903  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:31.183941  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:31.234125  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:31.234151  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:31.248231  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:31.248259  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:31.327593  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:31.327624  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:31.327642  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:33.917744  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:33.932916  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:33.932994  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:33.968646  604817 cri.go:89] found id: ""
	I0127 14:17:33.968681  604817 logs.go:282] 0 containers: []
	W0127 14:17:33.968692  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:33.968701  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:33.968769  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:34.001635  604817 cri.go:89] found id: ""
	I0127 14:17:34.001660  604817 logs.go:282] 0 containers: []
	W0127 14:17:34.001670  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:34.001677  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:34.001730  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:34.040378  604817 cri.go:89] found id: ""
	I0127 14:17:34.040413  604817 logs.go:282] 0 containers: []
	W0127 14:17:34.040424  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:34.040437  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:34.040514  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:34.079676  604817 cri.go:89] found id: ""
	I0127 14:17:34.079699  604817 logs.go:282] 0 containers: []
	W0127 14:17:34.079707  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:34.079712  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:34.079770  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:34.118265  604817 cri.go:89] found id: ""
	I0127 14:17:34.118289  604817 logs.go:282] 0 containers: []
	W0127 14:17:34.118297  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:34.118302  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:34.118354  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:34.152627  604817 cri.go:89] found id: ""
	I0127 14:17:34.152657  604817 logs.go:282] 0 containers: []
	W0127 14:17:34.152668  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:34.152679  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:34.152758  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:34.194266  604817 cri.go:89] found id: ""
	I0127 14:17:34.194289  604817 logs.go:282] 0 containers: []
	W0127 14:17:34.194299  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:34.194306  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:34.194376  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:34.233702  604817 cri.go:89] found id: ""
	I0127 14:17:34.233728  604817 logs.go:282] 0 containers: []
	W0127 14:17:34.233737  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:34.233747  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:34.233762  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:34.306482  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:34.306515  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:34.306540  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:34.384718  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:34.384758  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:34.431141  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:34.431181  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:34.496328  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:34.496361  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
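	(The cycle above — pgrep for kube-apiserver, then per-component crictl queries, then log gathering — can be reproduced manually on the node with the same commands the log records; a minimal sketch, assuming the CRI-O runtime shown above:
	# check whether an apiserver process exists (same probe the log runs)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# list control-plane containers known to CRI-O, running or exited
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	# gather the same diagnostics minikube collects
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	)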
	I0127 14:17:37.010545  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:37.025879  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:17:37.025953  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:17:37.060262  604817 cri.go:89] found id: ""
	I0127 14:17:37.060292  604817 logs.go:282] 0 containers: []
	W0127 14:17:37.060301  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:17:37.060307  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:17:37.060373  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:17:37.092788  604817 cri.go:89] found id: ""
	I0127 14:17:37.092820  604817 logs.go:282] 0 containers: []
	W0127 14:17:37.092832  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:17:37.092841  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:17:37.092908  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:17:37.128770  604817 cri.go:89] found id: ""
	I0127 14:17:37.128799  604817 logs.go:282] 0 containers: []
	W0127 14:17:37.128811  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:17:37.128819  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:17:37.128882  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:17:37.167755  604817 cri.go:89] found id: ""
	I0127 14:17:37.167790  604817 logs.go:282] 0 containers: []
	W0127 14:17:37.167806  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:17:37.167814  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:17:37.167880  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:17:37.205933  604817 cri.go:89] found id: ""
	I0127 14:17:37.205967  604817 logs.go:282] 0 containers: []
	W0127 14:17:37.205979  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:17:37.205987  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:17:37.206057  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:17:37.246940  604817 cri.go:89] found id: ""
	I0127 14:17:37.246967  604817 logs.go:282] 0 containers: []
	W0127 14:17:37.246975  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:17:37.246982  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:17:37.247032  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:17:37.294437  604817 cri.go:89] found id: ""
	I0127 14:17:37.294469  604817 logs.go:282] 0 containers: []
	W0127 14:17:37.294480  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:17:37.294489  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:17:37.294559  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:17:37.337949  604817 cri.go:89] found id: ""
	I0127 14:17:37.337979  604817 logs.go:282] 0 containers: []
	W0127 14:17:37.337991  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:17:37.338004  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:17:37.338022  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:17:37.416591  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:17:37.416621  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:17:37.416641  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:17:37.505067  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:17:37.505108  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 14:17:37.544968  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:17:37.545016  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:17:37.600408  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:17:37.600443  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:17:40.118776  604817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:17:40.133453  604817 kubeadm.go:597] duration metric: took 4m3.358141947s to restartPrimaryControlPlane
	W0127 14:17:40.133539  604817 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 14:17:40.133574  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 14:17:40.723608  604817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:17:40.739760  604817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:17:40.749993  604817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:17:40.760196  604817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:17:40.760218  604817 kubeadm.go:157] found existing configuration files:
	
	I0127 14:17:40.760262  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:17:40.769752  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:17:40.769795  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:17:40.779404  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:17:40.788955  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:17:40.789010  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:17:40.798346  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:17:40.807675  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:17:40.807724  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:17:40.817458  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:17:40.827151  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:17:40.827197  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
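	(The stale-config cleanup above applies one pattern per file: grep for the expected control-plane endpoint and, when the file is missing or does not match, remove it before re-running kubeadm init. A condensed sketch using the same paths and endpoint shown in the log, illustrative only:
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected endpoint
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
	)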
	I0127 14:17:40.837560  604817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:17:40.909224  604817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 14:17:40.909300  604817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:17:41.055948  604817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:17:41.056122  604817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:17:41.056320  604817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 14:17:41.254196  604817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:17:41.255501  604817 out.go:235]   - Generating certificates and keys ...
	I0127 14:17:41.255618  604817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:17:41.255697  604817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:17:41.255849  604817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:17:41.255990  604817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:17:41.256125  604817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:17:41.256233  604817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:17:41.256326  604817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:17:41.256433  604817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:17:41.256544  604817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:17:41.256650  604817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:17:41.256711  604817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:17:41.256803  604817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:17:41.541436  604817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:17:41.817907  604817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:17:41.922927  604817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:17:42.006620  604817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:17:42.021159  604817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:17:42.022579  604817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:17:42.022652  604817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:17:42.179524  604817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:17:42.181134  604817 out.go:235]   - Booting up control plane ...
	I0127 14:17:42.181308  604817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:17:42.191608  604817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:17:42.195059  604817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:17:42.195712  604817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:17:42.199373  604817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 14:18:22.196404  604817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 14:18:22.196624  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:18:22.196848  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:18:27.197036  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:18:27.197321  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:18:37.197061  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:18:37.197241  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:18:57.196663  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:18:57.196914  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:19:37.196563  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:19:37.196805  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:19:37.196823  604817 kubeadm.go:310] 
	I0127 14:19:37.196876  604817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 14:19:37.196937  604817 kubeadm.go:310] 		timed out waiting for the condition
	I0127 14:19:37.196947  604817 kubeadm.go:310] 
	I0127 14:19:37.196991  604817 kubeadm.go:310] 	This error is likely caused by:
	I0127 14:19:37.197037  604817 kubeadm.go:310] 		- The kubelet is not running
	I0127 14:19:37.197184  604817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 14:19:37.197209  604817 kubeadm.go:310] 
	I0127 14:19:37.197355  604817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 14:19:37.197418  604817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 14:19:37.197460  604817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 14:19:37.197471  604817 kubeadm.go:310] 
	I0127 14:19:37.197639  604817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 14:19:37.197760  604817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 14:19:37.197770  604817 kubeadm.go:310] 
	I0127 14:19:37.197916  604817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 14:19:37.198059  604817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 14:19:37.198182  604817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 14:19:37.198298  604817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 14:19:37.198309  604817 kubeadm.go:310] 
	I0127 14:19:37.199197  604817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:19:37.199337  604817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 14:19:37.199443  604817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 14:19:37.199632  604817 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
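	(The troubleshooting advice printed above reduces to three checks on the node; the commands are the ones kubeadm itself suggests, gathered here for reference and assuming the CRI-O socket path from the log:
	# is the kubelet running, and why did it exit?
	systemctl status kubelet
	journalctl -xeu kubelet
	# did a control-plane container start and then crash under CRI-O?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID taken from the previous command
	)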
	
	I0127 14:19:37.199681  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 14:19:37.668519  604817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:19:37.683765  604817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:19:37.697989  604817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:19:37.698007  604817 kubeadm.go:157] found existing configuration files:
	
	I0127 14:19:37.698049  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:19:37.707769  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:19:37.707831  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:19:37.718041  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:19:37.727763  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:19:37.727819  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:19:37.738011  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:19:37.748442  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:19:37.748490  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:19:37.761442  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:19:37.774356  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:19:37.774399  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:19:37.784967  604817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:19:37.871087  604817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 14:19:37.871175  604817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:19:38.032414  604817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:19:38.032565  604817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:19:38.032734  604817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 14:19:38.273161  604817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:19:38.275762  604817 out.go:235]   - Generating certificates and keys ...
	I0127 14:19:38.275888  604817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:19:38.275984  604817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:19:38.276125  604817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:19:38.276222  604817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:19:38.276321  604817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:19:38.276396  604817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:19:38.276481  604817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:19:38.277362  604817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:19:38.279167  604817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:19:38.281048  604817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:19:38.281114  604817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:19:38.281187  604817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:19:38.445908  604817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:19:38.778696  604817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:19:38.962115  604817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:19:39.051249  604817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:19:39.083155  604817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:19:39.083281  604817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:19:39.083362  604817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:19:39.269395  604817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:19:39.270936  604817 out.go:235]   - Booting up control plane ...
	I0127 14:19:39.271056  604817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:19:39.280186  604817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:19:39.281499  604817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:19:39.282385  604817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:19:39.289618  604817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 14:20:19.292572  604817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 14:20:19.292661  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:20:19.292824  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:20:24.293094  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:20:24.293416  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:20:34.294153  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:20:34.294367  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:20:54.295205  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:20:54.295461  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:21:34.296827  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:21:34.297079  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:21:34.297109  604817 kubeadm.go:310] 
	I0127 14:21:34.297169  604817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 14:21:34.297220  604817 kubeadm.go:310] 		timed out waiting for the condition
	I0127 14:21:34.297231  604817 kubeadm.go:310] 
	I0127 14:21:34.297278  604817 kubeadm.go:310] 	This error is likely caused by:
	I0127 14:21:34.297325  604817 kubeadm.go:310] 		- The kubelet is not running
	I0127 14:21:34.297447  604817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 14:21:34.297468  604817 kubeadm.go:310] 
	I0127 14:21:34.297632  604817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 14:21:34.297677  604817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 14:21:34.297717  604817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 14:21:34.297728  604817 kubeadm.go:310] 
	I0127 14:21:34.297894  604817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 14:21:34.298028  604817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 14:21:34.298043  604817 kubeadm.go:310] 
	I0127 14:21:34.298170  604817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 14:21:34.298253  604817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 14:21:34.298316  604817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 14:21:34.298397  604817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 14:21:34.298408  604817 kubeadm.go:310] 
	I0127 14:21:34.299114  604817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:21:34.299216  604817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 14:21:34.299291  604817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 14:21:34.299356  604817 kubeadm.go:394] duration metric: took 7m57.571006925s to StartCluster
	I0127 14:21:34.299406  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:21:34.299474  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:21:34.355761  604817 cri.go:89] found id: ""
	I0127 14:21:34.355787  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.355798  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:21:34.355807  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:21:34.355871  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:21:34.395943  604817 cri.go:89] found id: ""
	I0127 14:21:34.395967  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.395977  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:21:34.395985  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:21:34.396045  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:21:34.435060  604817 cri.go:89] found id: ""
	I0127 14:21:34.435078  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.435098  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:21:34.435117  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:21:34.435190  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:21:34.471426  604817 cri.go:89] found id: ""
	I0127 14:21:34.471450  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.471461  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:21:34.471469  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:21:34.471528  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:21:34.505950  604817 cri.go:89] found id: ""
	I0127 14:21:34.505976  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.505984  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:21:34.505990  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:21:34.506043  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:21:34.539754  604817 cri.go:89] found id: ""
	I0127 14:21:34.539776  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.539784  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:21:34.539789  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:21:34.539841  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:21:34.571093  604817 cri.go:89] found id: ""
	I0127 14:21:34.571120  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.571134  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:21:34.571139  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:21:34.571186  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:21:34.608370  604817 cri.go:89] found id: ""
	I0127 14:21:34.608395  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.608404  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:21:34.608427  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:21:34.608442  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:21:34.662214  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:21:34.662239  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:21:34.675535  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:21:34.675559  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:21:34.750391  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:21:34.750415  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:21:34.750429  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:21:34.851544  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:21:34.851575  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0127 14:21:34.919115  604817 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 14:21:34.919173  604817 out.go:270] * 
	W0127 14:21:34.919254  604817 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 14:21:34.919275  604817 out.go:270] * 
	W0127 14:21:34.920116  604817 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 14:21:34.923401  604817 out.go:201] 
	W0127 14:21:34.924638  604817 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 14:21:34.924682  604817 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 14:21:34.924709  604817 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 14:21:34.926036  604817 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-456130 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
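The stderr block above ends with minikube's own suggestion to pin the kubelet cgroup driver to systemd (see the Suggestion line and related issue #4172). A minimal sketch of that retry, reusing the flags from the failing invocation; this is illustrative only, not a verified fix for this run:

	out/minikube-linux-amd64 start -p old-k8s-version-456130 --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	    --extra-config=kubelet.cgroup-driver=systemd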
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 2 (237.916448ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
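Before the post-mortem logs, the kubelet health probe that kubeadm kept retrying (curl against localhost:10248/healthz) can also be run by hand on the node; a minimal sketch, assuming the old-k8s-version-456130 VM is still up under the kvm2 driver:

	out/minikube-linux-amd64 -p old-k8s-version-456130 ssh "sudo systemctl status kubelet --no-pager; curl -sS http://localhost:10248/healthz"

This mirrors the 'systemctl status kubelet' and healthz checks recommended in the kubeadm output above.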
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-456130 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-183205                  | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:11 UTC | 27 Jan 25 14:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-183205                                   | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:11 UTC | 27 Jan 25 14:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-456130        | old-k8s-version-456130       | jenkins | v1.35.0 | 27 Jan 25 14:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-456130                              | old-k8s-version-456130       | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-456130             | old-k8s-version-456130       | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC | 27 Jan 25 14:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-456130                              | old-k8s-version-456130       | jenkins | v1.35.0 | 27 Jan 25 14:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | no-preload-183205 image list                           | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-183205                                   | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-183205                                   | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-183205                                   | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	| delete  | -p no-preload-183205                                   | no-preload-183205            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-650791 | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:16 UTC |
	|         | disable-driver-mounts-650791                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-379305 --memory=2200 --alsologtostderr   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:16 UTC | 27 Jan 25 14:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-379305             | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-379305                                   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-379305                  | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-379305 --memory=2200 --alsologtostderr   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:17 UTC | 27 Jan 25 14:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-379305 image list                           | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-379305                                   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-379305                                   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-379305                                   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	| delete  | -p newest-cni-379305                                   | newest-cni-379305            | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:19 UTC |
	| start   | -p                                                     | default-k8s-diff-port-178758 | jenkins | v1.35.0 | 27 Jan 25 14:19 UTC | 27 Jan 25 14:20 UTC |
	|         | default-k8s-diff-port-178758                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-178758  | default-k8s-diff-port-178758 | jenkins | v1.35.0 | 27 Jan 25 14:20 UTC | 27 Jan 25 14:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-178758 | jenkins | v1.35.0 | 27 Jan 25 14:20 UTC |                     |
	|         | default-k8s-diff-port-178758                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:19:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:19:11.172148  608170 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:19:11.172250  608170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:19:11.172262  608170 out.go:358] Setting ErrFile to fd 2...
	I0127 14:19:11.172269  608170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:19:11.172450  608170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:19:11.172975  608170 out.go:352] Setting JSON to false
	I0127 14:19:11.174033  608170 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":18096,"bootTime":1737969455,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:19:11.174144  608170 start.go:139] virtualization: kvm guest
	I0127 14:19:11.175717  608170 out.go:177] * [default-k8s-diff-port-178758] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:19:11.177099  608170 notify.go:220] Checking for updates...
	I0127 14:19:11.177130  608170 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:19:11.178214  608170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:19:11.179312  608170 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:19:11.180446  608170 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:19:11.181545  608170 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:19:11.182610  608170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:19:11.183994  608170 config.go:182] Loaded profile config "embed-certs-742142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:19:11.184098  608170 config.go:182] Loaded profile config "kubernetes-upgrade-225004": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:19:11.184207  608170 config.go:182] Loaded profile config "old-k8s-version-456130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:19:11.184305  608170 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:19:11.219984  608170 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:19:11.221016  608170 start.go:297] selected driver: kvm2
	I0127 14:19:11.221033  608170 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:19:11.221046  608170 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:19:11.222054  608170 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:19:11.222149  608170 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:19:11.236891  608170 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:19:11.236926  608170 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:19:11.237248  608170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:19:11.237292  608170 cni.go:84] Creating CNI manager for ""
	I0127 14:19:11.237352  608170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:19:11.237366  608170 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:19:11.237425  608170 start.go:340] cluster config:
	{Name:default-k8s-diff-port-178758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-178758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:19:11.237555  608170 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:19:11.239057  608170 out.go:177] * Starting "default-k8s-diff-port-178758" primary control-plane node in "default-k8s-diff-port-178758" cluster
	I0127 14:19:11.240172  608170 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:19:11.240217  608170 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:19:11.240229  608170 cache.go:56] Caching tarball of preloaded images
	I0127 14:19:11.240327  608170 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:19:11.240339  608170 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:19:11.240448  608170 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/config.json ...
	I0127 14:19:11.240471  608170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/config.json: {Name:mk7b7c48cd1dddc94ff662d5db1ba463df757dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:11.240629  608170 start.go:360] acquireMachinesLock for default-k8s-diff-port-178758: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:19:11.240669  608170 start.go:364] duration metric: took 21.177µs to acquireMachinesLock for "default-k8s-diff-port-178758"
	I0127 14:19:11.240688  608170 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-178758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-178758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:19:11.240755  608170 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 14:19:11.242158  608170 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 14:19:11.242325  608170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:19:11.242375  608170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:11.256360  608170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45469
	I0127 14:19:11.256836  608170 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:11.257505  608170 main.go:141] libmachine: Using API Version  1
	I0127 14:19:11.257525  608170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:11.257869  608170 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:11.258067  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetMachineName
	I0127 14:19:11.258225  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:19:11.258348  608170 start.go:159] libmachine.API.Create for "default-k8s-diff-port-178758" (driver="kvm2")
	I0127 14:19:11.258375  608170 client.go:168] LocalClient.Create starting
	I0127 14:19:11.258399  608170 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem
	I0127 14:19:11.258426  608170 main.go:141] libmachine: Decoding PEM data...
	I0127 14:19:11.258440  608170 main.go:141] libmachine: Parsing certificate...
	I0127 14:19:11.258489  608170 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem
	I0127 14:19:11.258507  608170 main.go:141] libmachine: Decoding PEM data...
	I0127 14:19:11.258525  608170 main.go:141] libmachine: Parsing certificate...
	I0127 14:19:11.258548  608170 main.go:141] libmachine: Running pre-create checks...
	I0127 14:19:11.258558  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .PreCreateCheck
	I0127 14:19:11.258921  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetConfigRaw
	I0127 14:19:11.259358  608170 main.go:141] libmachine: Creating machine...
	I0127 14:19:11.259378  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Create
	I0127 14:19:11.259518  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) creating KVM machine...
	I0127 14:19:11.259534  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) creating network...
	I0127 14:19:11.260748  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found existing default KVM network
	I0127 14:19:11.261991  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:11.261825  608193 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1d:6c:da} reservation:<nil>}
	I0127 14:19:11.263059  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:11.262986  608193 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000209ff0}
	I0127 14:19:11.263076  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | created network xml: 
	I0127 14:19:11.263082  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | <network>
	I0127 14:19:11.263093  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG |   <name>mk-default-k8s-diff-port-178758</name>
	I0127 14:19:11.263104  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG |   <dns enable='no'/>
	I0127 14:19:11.263109  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG |   
	I0127 14:19:11.263122  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0127 14:19:11.263131  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG |     <dhcp>
	I0127 14:19:11.263137  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0127 14:19:11.263142  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG |     </dhcp>
	I0127 14:19:11.263146  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG |   </ip>
	I0127 14:19:11.263151  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG |   
	I0127 14:19:11.263155  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | </network>
	I0127 14:19:11.263161  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | 
	I0127 14:19:11.267588  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | trying to create private KVM network mk-default-k8s-diff-port-178758 192.168.50.0/24...
	I0127 14:19:11.337955  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | private KVM network mk-default-k8s-diff-port-178758 192.168.50.0/24 created
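(Editor's note: the log above shows the libvirt network XML that minikube defines before creating the private network. As a minimal sketch, assuming `virsh` is installed on the host and can reach `qemu:///system`, the resulting network can be inspected out-of-band; the network name is taken from the log, everything else is illustrative.)

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Dump the XML of the private network that the log above reports as created.
	// Assumes the virsh CLI is on PATH and has access to qemu:///system.
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"net-dumpxml", "mk-default-k8s-diff-port-178758").CombinedOutput()
	if err != nil {
		log.Fatalf("virsh net-dumpxml failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
```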
	I0127 14:19:11.337994  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) setting up store path in /home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758 ...
	I0127 14:19:11.338004  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:11.337896  608193 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:19:11.338017  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) building disk image from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:19:11.338203  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Downloading /home/jenkins/minikube-integration/20327-555419/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 14:19:11.656792  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:11.656681  608193 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa...
	I0127 14:19:11.750196  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:11.750089  608193 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/default-k8s-diff-port-178758.rawdisk...
	I0127 14:19:11.750225  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Writing magic tar header
	I0127 14:19:11.750252  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Writing SSH key tar header
	I0127 14:19:11.750261  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:11.750203  608193 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758 ...
	I0127 14:19:11.750409  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758
	I0127 14:19:11.750446  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines
	I0127 14:19:11.750461  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758 (perms=drwx------)
	I0127 14:19:11.750483  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:19:11.750499  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419
	I0127 14:19:11.750521  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 14:19:11.750537  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | checking permissions on dir: /home/jenkins
	I0127 14:19:11.750552  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | checking permissions on dir: /home
	I0127 14:19:11.750572  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines (perms=drwxr-xr-x)
	I0127 14:19:11.750592  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube (perms=drwxr-xr-x)
	I0127 14:19:11.750607  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) setting executable bit set on /home/jenkins/minikube-integration/20327-555419 (perms=drwxrwxr-x)
	I0127 14:19:11.750623  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 14:19:11.750636  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 14:19:11.750649  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | skipping /home - not owner
	I0127 14:19:11.750665  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) creating domain...
	I0127 14:19:11.751621  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) define libvirt domain using xml: 
	I0127 14:19:11.751643  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) <domain type='kvm'>
	I0127 14:19:11.751654  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   <name>default-k8s-diff-port-178758</name>
	I0127 14:19:11.751661  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   <memory unit='MiB'>2200</memory>
	I0127 14:19:11.751666  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   <vcpu>2</vcpu>
	I0127 14:19:11.751671  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   <features>
	I0127 14:19:11.751676  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <acpi/>
	I0127 14:19:11.751685  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <apic/>
	I0127 14:19:11.751690  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <pae/>
	I0127 14:19:11.751697  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     
	I0127 14:19:11.751705  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   </features>
	I0127 14:19:11.751713  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   <cpu mode='host-passthrough'>
	I0127 14:19:11.751718  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   
	I0127 14:19:11.751740  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   </cpu>
	I0127 14:19:11.751773  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   <os>
	I0127 14:19:11.751796  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <type>hvm</type>
	I0127 14:19:11.751826  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <boot dev='cdrom'/>
	I0127 14:19:11.751850  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <boot dev='hd'/>
	I0127 14:19:11.751862  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <bootmenu enable='no'/>
	I0127 14:19:11.751878  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   </os>
	I0127 14:19:11.751902  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   <devices>
	I0127 14:19:11.751925  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <disk type='file' device='cdrom'>
	I0127 14:19:11.751943  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/boot2docker.iso'/>
	I0127 14:19:11.751967  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <target dev='hdc' bus='scsi'/>
	I0127 14:19:11.751980  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <readonly/>
	I0127 14:19:11.751992  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     </disk>
	I0127 14:19:11.752006  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <disk type='file' device='disk'>
	I0127 14:19:11.752019  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 14:19:11.752044  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/default-k8s-diff-port-178758.rawdisk'/>
	I0127 14:19:11.752061  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <target dev='hda' bus='virtio'/>
	I0127 14:19:11.752073  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     </disk>
	I0127 14:19:11.752085  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <interface type='network'>
	I0127 14:19:11.752098  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <source network='mk-default-k8s-diff-port-178758'/>
	I0127 14:19:11.752104  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <model type='virtio'/>
	I0127 14:19:11.752121  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     </interface>
	I0127 14:19:11.752136  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <interface type='network'>
	I0127 14:19:11.752166  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <source network='default'/>
	I0127 14:19:11.752187  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <model type='virtio'/>
	I0127 14:19:11.752206  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     </interface>
	I0127 14:19:11.752224  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <serial type='pty'>
	I0127 14:19:11.752237  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <target port='0'/>
	I0127 14:19:11.752248  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     </serial>
	I0127 14:19:11.752257  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <console type='pty'>
	I0127 14:19:11.752268  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <target type='serial' port='0'/>
	I0127 14:19:11.752279  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     </console>
	I0127 14:19:11.752288  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     <rng model='virtio'>
	I0127 14:19:11.752298  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)       <backend model='random'>/dev/random</backend>
	I0127 14:19:11.752310  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     </rng>
	I0127 14:19:11.752322  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     
	I0127 14:19:11.752330  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)     
	I0127 14:19:11.752343  608170 main.go:141] libmachine: (default-k8s-diff-port-178758)   </devices>
	I0127 14:19:11.752352  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) </domain>
	I0127 14:19:11.752363  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) 
	I0127 14:19:11.756313  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:87:68:31 in network default
	I0127 14:19:11.756860  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) starting domain...
	I0127 14:19:11.756877  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:11.756891  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) ensuring networks are active...
	I0127 14:19:11.757607  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Ensuring network default is active
	I0127 14:19:11.757916  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Ensuring network mk-default-k8s-diff-port-178758 is active
	I0127 14:19:11.758436  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) getting domain XML...
	I0127 14:19:11.759128  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) creating domain...
	I0127 14:19:12.078817  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) waiting for IP...
	I0127 14:19:12.079610  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:12.080014  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:12.080091  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:12.080019  608193 retry.go:31] will retry after 246.448065ms: waiting for domain to come up
	I0127 14:19:12.328526  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:12.329048  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:12.329100  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:12.329042  608193 retry.go:31] will retry after 273.483606ms: waiting for domain to come up
	I0127 14:19:12.604563  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:12.605038  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:12.605073  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:12.605004  608193 retry.go:31] will retry after 349.363253ms: waiting for domain to come up
	I0127 14:19:12.955483  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:12.955906  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:12.955928  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:12.955868  608193 retry.go:31] will retry after 515.163932ms: waiting for domain to come up
	I0127 14:19:13.472526  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:13.473143  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:13.473175  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:13.473110  608193 retry.go:31] will retry after 607.539133ms: waiting for domain to come up
	I0127 14:19:14.081794  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:14.082311  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:14.082359  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:14.082268  608193 retry.go:31] will retry after 919.674111ms: waiting for domain to come up
	I0127 14:19:15.003108  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:15.003570  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:15.003601  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:15.003541  608193 retry.go:31] will retry after 799.114996ms: waiting for domain to come up
	I0127 14:19:15.804563  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:15.804997  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:15.805022  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:15.804973  608193 retry.go:31] will retry after 1.35718555s: waiting for domain to come up
	I0127 14:19:17.163743  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:17.164158  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:17.164181  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:17.164132  608193 retry.go:31] will retry after 1.516339182s: waiting for domain to come up
	I0127 14:19:18.682669  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:18.683104  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:18.683132  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:18.683072  608193 retry.go:31] will retry after 1.799472761s: waiting for domain to come up
	I0127 14:19:20.484490  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:20.485016  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:20.485057  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:20.484976  608193 retry.go:31] will retry after 2.612387097s: waiting for domain to come up
	I0127 14:19:23.099003  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:23.099501  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:23.099525  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:23.099469  608193 retry.go:31] will retry after 2.502348465s: waiting for domain to come up
	I0127 14:19:25.605030  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:25.605435  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:25.605469  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:25.605406  608193 retry.go:31] will retry after 4.277672038s: waiting for domain to come up
	I0127 14:19:29.886635  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:29.887073  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find current IP address of domain default-k8s-diff-port-178758 in network mk-default-k8s-diff-port-178758
	I0127 14:19:29.887134  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | I0127 14:19:29.887064  608193 retry.go:31] will retry after 5.057794978s: waiting for domain to come up
	I0127 14:19:34.949510  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:34.950076  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) found domain IP: 192.168.50.187
	I0127 14:19:34.950120  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has current primary IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
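(Editor's note: the "waiting for IP" lines above are a poll-with-growing-delay loop; retry.go backs off from roughly 250ms to several seconds until the domain acquires a DHCP lease. The sketch below illustrates that pattern only; it is not minikube's actual retry implementation, and `hasLease` is a hypothetical stand-in for the lease check.)

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check() with a growing delay until it succeeds or the
// timeout elapses, mirroring the "will retry after ..." loop in the log.
func waitFor(check func() bool, timeout time.Duration) error {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // lengthen the wait between polls
		}
	}
	return errors.New("timed out waiting for domain IP")
}

func main() {
	// hasLease is a placeholder for "the domain has an IP in the libvirt network".
	attempts := 0
	hasLease := func() bool { attempts++; return attempts > 4 }
	if err := waitFor(hasLease, time.Minute); err != nil {
		fmt.Println(err)
	}
}
```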
	I0127 14:19:34.950131  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) reserving static IP address...
	I0127 14:19:34.950484  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | unable to find host DHCP lease matching {name: "default-k8s-diff-port-178758", mac: "52:54:00:9e:12:0f", ip: "192.168.50.187"} in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.024015  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Getting to WaitForSSH function...
	I0127 14:19:35.024045  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) reserved static IP address 192.168.50.187 for domain default-k8s-diff-port-178758
	I0127 14:19:35.024058  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) waiting for SSH...
	I0127 14:19:35.026789  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.027190  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:35.027215  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.027352  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Using SSH client type: external
	I0127 14:19:35.027382  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa (-rw-------)
	I0127 14:19:35.027426  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:19:35.027440  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | About to run SSH command:
	I0127 14:19:35.027454  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | exit 0
	I0127 14:19:35.150078  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | SSH cmd err, output: <nil>: 
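(Editor's note: the WaitForSSH step above decides the guest is reachable by running `exit 0` over SSH with host-key checking disabled. A minimal sketch of the same probe is below; the flags, key path, and IP are copied from the log lines above, and this is an illustration rather than minikube's own code.)

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe SSH reachability the same way the log does: run "exit 0" as the
	// docker user with the generated key and no host-key verification.
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", "/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa",
		"docker@192.168.50.187", "exit 0")
	if err := cmd.Run(); err != nil {
		fmt.Println("guest not reachable yet:", err)
		return
	}
	fmt.Println("SSH is up")
}
```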
	I0127 14:19:35.150501  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) KVM machine creation complete
	I0127 14:19:35.150754  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetConfigRaw
	I0127 14:19:35.151411  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:19:35.151631  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:19:35.151816  608170 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 14:19:35.151835  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetState
	I0127 14:19:35.153157  608170 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 14:19:35.153171  608170 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 14:19:35.153176  608170 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 14:19:35.153182  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:35.155311  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.155666  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:35.155702  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.155840  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:35.156007  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:35.156158  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:35.156289  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:35.156419  608170 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:35.156623  608170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0127 14:19:35.156634  608170 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 14:19:35.260511  608170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:19:35.260529  608170 main.go:141] libmachine: Detecting the provisioner...
	I0127 14:19:35.260540  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:35.263453  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.263861  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:35.263886  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.264068  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:35.264223  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:35.264356  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:35.264453  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:35.264598  608170 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:35.264767  608170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0127 14:19:35.264778  608170 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 14:19:35.365550  608170 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 14:19:35.365662  608170 main.go:141] libmachine: found compatible host: buildroot
	I0127 14:19:35.365676  608170 main.go:141] libmachine: Provisioning with buildroot...
	I0127 14:19:35.365683  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetMachineName
	I0127 14:19:35.365878  608170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-178758"
	I0127 14:19:35.365900  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetMachineName
	I0127 14:19:35.366085  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:35.368118  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.368393  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:35.368425  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.368537  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:35.368661  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:35.368777  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:35.368893  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:35.369002  608170 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:35.369195  608170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0127 14:19:35.369208  608170 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-178758 && echo "default-k8s-diff-port-178758" | sudo tee /etc/hostname
	I0127 14:19:35.479638  608170 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-178758
	
	I0127 14:19:35.479668  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:35.482449  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.482790  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:35.482826  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.483088  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:35.483227  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:35.483407  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:35.483550  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:35.483695  608170 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:35.483894  608170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0127 14:19:35.483914  608170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-178758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-178758/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-178758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:19:35.589368  608170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:19:35.589395  608170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:19:35.589447  608170 buildroot.go:174] setting up certificates
	I0127 14:19:35.589461  608170 provision.go:84] configureAuth start
	I0127 14:19:35.589479  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetMachineName
	I0127 14:19:35.589729  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetIP
	I0127 14:19:35.591922  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.592210  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:35.592239  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.592416  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:35.594555  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.594995  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:35.595035  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.595205  608170 provision.go:143] copyHostCerts
	I0127 14:19:35.595251  608170 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:19:35.595267  608170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:19:35.595340  608170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:19:35.595437  608170 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:19:35.595446  608170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:19:35.595485  608170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:19:35.595552  608170 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:19:35.595558  608170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:19:35.595578  608170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:19:35.595635  608170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-178758 san=[127.0.0.1 192.168.50.187 default-k8s-diff-port-178758 localhost minikube]
	I0127 14:19:35.797598  608170 provision.go:177] copyRemoteCerts
	I0127 14:19:35.797647  608170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:19:35.797669  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:35.800031  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.800367  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:35.800408  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.800530  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:35.800708  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:35.800864  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:35.800995  608170 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:19:35.879882  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:19:35.904067  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 14:19:35.927462  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:19:35.950865  608170 provision.go:87] duration metric: took 361.38608ms to configureAuth
	I0127 14:19:35.950895  608170 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:19:35.951065  608170 config.go:182] Loaded profile config "default-k8s-diff-port-178758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:19:35.951168  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:35.953623  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.953973  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:35.954005  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:35.954162  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:35.954342  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:35.954507  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:35.954632  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:35.954780  608170 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:35.954963  608170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0127 14:19:35.954980  608170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:19:36.163682  608170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:19:36.163720  608170 main.go:141] libmachine: Checking connection to Docker...
	I0127 14:19:36.163729  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetURL
	I0127 14:19:36.164928  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | using libvirt version 6000000
	I0127 14:19:36.167087  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.167349  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:36.167381  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.167467  608170 main.go:141] libmachine: Docker is up and running!
	I0127 14:19:36.167481  608170 main.go:141] libmachine: Reticulating splines...
	I0127 14:19:36.167492  608170 client.go:171] duration metric: took 24.909104856s to LocalClient.Create
	I0127 14:19:36.167520  608170 start.go:167] duration metric: took 24.90917032s to libmachine.API.Create "default-k8s-diff-port-178758"
	I0127 14:19:36.167534  608170 start.go:293] postStartSetup for "default-k8s-diff-port-178758" (driver="kvm2")
	I0127 14:19:36.167548  608170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:19:36.167574  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:19:36.167846  608170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:19:36.167889  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:36.169769  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.170022  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:36.170061  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.170117  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:36.170316  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:36.170457  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:36.170611  608170 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:19:36.251148  608170 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:19:36.255282  608170 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:19:36.255304  608170 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:19:36.255365  608170 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:19:36.255478  608170 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:19:36.255605  608170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:19:36.264533  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:19:36.287500  608170 start.go:296] duration metric: took 119.94298ms for postStartSetup
	I0127 14:19:36.287555  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetConfigRaw
	I0127 14:19:36.288171  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetIP
	I0127 14:19:36.291057  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.291436  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:36.291467  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.291724  608170 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/config.json ...
	I0127 14:19:36.291901  608170 start.go:128] duration metric: took 25.051136986s to createHost
	I0127 14:19:36.291922  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:36.294133  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.294535  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:36.294572  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.294681  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:36.294881  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:36.295057  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:36.295244  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:36.295398  608170 main.go:141] libmachine: Using SSH client type: native
	I0127 14:19:36.295560  608170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0127 14:19:36.295571  608170 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:19:36.394393  608170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737987576.353015500
	
	I0127 14:19:36.394419  608170 fix.go:216] guest clock: 1737987576.353015500
	I0127 14:19:36.394429  608170 fix.go:229] Guest: 2025-01-27 14:19:36.3530155 +0000 UTC Remote: 2025-01-27 14:19:36.291910752 +0000 UTC m=+25.160235431 (delta=61.104748ms)
	I0127 14:19:36.394456  608170 fix.go:200] guest clock delta is within tolerance: 61.104748ms
	I0127 14:19:36.394462  608170 start.go:83] releasing machines lock for "default-k8s-diff-port-178758", held for 25.153785043s
	I0127 14:19:36.394481  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:19:36.394734  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetIP
	I0127 14:19:36.396999  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.397305  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:36.397347  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.397465  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:19:36.398123  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:19:36.398338  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:19:36.398438  608170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:19:36.398494  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:36.398533  608170 ssh_runner.go:195] Run: cat /version.json
	I0127 14:19:36.398554  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:36.401053  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.401363  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:36.401398  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.401423  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.401554  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:36.401717  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:36.401896  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:36.401910  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:36.401937  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:36.402048  608170 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:19:36.402096  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:36.402248  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:36.402403  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:36.402551  608170 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:19:36.495906  608170 ssh_runner.go:195] Run: systemctl --version
	I0127 14:19:36.501433  608170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:19:36.661972  608170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:19:36.668577  608170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:19:36.668639  608170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:19:36.684756  608170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:19:36.684773  608170 start.go:495] detecting cgroup driver to use...
	I0127 14:19:36.684838  608170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:19:36.700533  608170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:19:36.714084  608170 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:19:36.714125  608170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:19:36.726885  608170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:19:36.739456  608170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:19:36.850484  608170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:19:36.981617  608170 docker.go:233] disabling docker service ...
	I0127 14:19:36.981710  608170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:19:36.997828  608170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:19:37.010527  608170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:19:37.143237  608170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:19:37.263155  608170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:19:37.277510  608170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:19:37.297816  608170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:19:37.297879  608170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:19:37.308323  608170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:19:37.308374  608170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:19:37.318891  608170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:19:37.329171  608170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:19:37.339708  608170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:19:37.350382  608170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:19:37.360412  608170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:19:37.376612  608170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:19:37.386759  608170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:19:37.396434  608170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:19:37.396495  608170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:19:37.408815  608170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:19:37.418455  608170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:37.535006  608170 ssh_runner.go:195] Run: sudo systemctl restart crio
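Taken together, the container-runtime preparation in the lines above boils down to the following shell sequence (commands and values as run in the log, lightly condensed):
	# point crictl (and minikube's CRI checks) at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and cgroup driver CRI-O should use
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# allow pods to bind low ports without privileges
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	# kernel prerequisites, then restart the runtime
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio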
	I0127 14:19:37.625484  608170 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:19:37.625566  608170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:19:37.630479  608170 start.go:563] Will wait 60s for crictl version
	I0127 14:19:37.630532  608170 ssh_runner.go:195] Run: which crictl
	I0127 14:19:37.634575  608170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:19:37.684191  608170 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:19:37.684276  608170 ssh_runner.go:195] Run: crio --version
	I0127 14:19:37.716507  608170 ssh_runner.go:195] Run: crio --version
	I0127 14:19:37.745790  608170 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:19:37.196563  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:19:37.196805  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:19:37.196823  604817 kubeadm.go:310] 
	I0127 14:19:37.196876  604817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 14:19:37.196937  604817 kubeadm.go:310] 		timed out waiting for the condition
	I0127 14:19:37.196947  604817 kubeadm.go:310] 
	I0127 14:19:37.196991  604817 kubeadm.go:310] 	This error is likely caused by:
	I0127 14:19:37.197037  604817 kubeadm.go:310] 		- The kubelet is not running
	I0127 14:19:37.197184  604817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 14:19:37.197209  604817 kubeadm.go:310] 
	I0127 14:19:37.197355  604817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 14:19:37.197418  604817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 14:19:37.197460  604817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 14:19:37.197471  604817 kubeadm.go:310] 
	I0127 14:19:37.197639  604817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 14:19:37.197760  604817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 14:19:37.197770  604817 kubeadm.go:310] 
	I0127 14:19:37.197916  604817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 14:19:37.198059  604817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 14:19:37.198182  604817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 14:19:37.198298  604817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 14:19:37.198309  604817 kubeadm.go:310] 
	I0127 14:19:37.199197  604817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:19:37.199337  604817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 14:19:37.199443  604817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 14:19:37.199632  604817 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 14:19:37.199681  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 14:19:37.668519  604817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:19:37.683765  604817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:19:37.697989  604817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:19:37.698007  604817 kubeadm.go:157] found existing configuration files:
	
	I0127 14:19:37.698049  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:19:37.707769  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:19:37.707831  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:19:37.718041  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:19:37.727763  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:19:37.727819  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:19:37.738011  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:19:37.748442  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:19:37.748490  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:19:37.761442  604817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:19:37.774356  604817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:19:37.774399  604817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
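The four grep-then-rm pairs above are a stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed before kubeadm init is retried. A condensed sketch of the same check, using the endpoint and paths from this run:
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done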
	I0127 14:19:37.784967  604817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:19:37.871087  604817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 14:19:37.871175  604817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:19:38.032414  604817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:19:38.032565  604817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:19:38.032734  604817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 14:19:38.273161  604817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:19:38.275762  604817 out.go:235]   - Generating certificates and keys ...
	I0127 14:19:38.275888  604817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:19:38.275984  604817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:19:38.276125  604817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:19:38.276222  604817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:19:38.276321  604817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:19:38.276396  604817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:19:38.276481  604817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:19:38.277362  604817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:19:38.279167  604817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:19:38.281048  604817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:19:38.281114  604817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:19:38.281187  604817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:19:38.445908  604817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:19:38.778696  604817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:19:38.962115  604817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:19:39.051249  604817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:19:39.083155  604817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:19:39.083281  604817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:19:39.083362  604817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:19:39.269395  604817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:19:39.270936  604817 out.go:235]   - Booting up control plane ...
	I0127 14:19:39.271056  604817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:19:39.280186  604817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:19:39.281499  604817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:19:39.282385  604817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:19:39.289618  604817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 14:19:37.746972  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetIP
	I0127 14:19:37.750542  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:37.750981  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:37.751019  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:37.751269  608170 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 14:19:37.756637  608170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:19:37.773984  608170 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-178758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-178758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.187 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:19:37.774151  608170 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:19:37.774211  608170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:19:37.808029  608170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 14:19:37.808092  608170 ssh_runner.go:195] Run: which lz4
	I0127 14:19:37.812241  608170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:19:37.816411  608170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:19:37.816442  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 14:19:39.267498  608170 crio.go:462] duration metric: took 1.455299048s to copy over tarball
	I0127 14:19:39.267596  608170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:19:41.317390  608170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.049760074s)
	I0127 14:19:41.317475  608170 crio.go:469] duration metric: took 2.04993542s to extract the tarball
	I0127 14:19:41.317514  608170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 14:19:41.356052  608170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:19:41.398396  608170 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:19:41.398420  608170 cache_images.go:84] Images are preloaded, skipping loading
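The preload handling above follows a simple pattern: inspect the runtime's image store, and only if the expected images are missing copy the preloaded tarball into the guest and unpack it over /var. The equivalent commands, as run in the log:
	sudo crictl images --output json        # is kube-apiserver:v1.32.1 already present?
	stat -c "%s %y" /preloaded.tar.lz4      # reuse a previously copied tarball if one exists
	# otherwise copy preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 over SSH
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	rm /preloaded.tar.lz4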
	I0127 14:19:41.398428  608170 kubeadm.go:934] updating node { 192.168.50.187 8444 v1.32.1 crio true true} ...
	I0127 14:19:41.398536  608170 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-178758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-178758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
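In the generated kubelet unit text above, the empty ExecStart= line is the standard systemd idiom for a drop-in: it clears the packaged ExecStart list so the following line fully replaces the default command. The fragment is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later (see the scp below); a minimal drop-in of that shape, with the flags as logged, looks like:
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Unit]
	Wants=crio.service
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-178758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.187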
	I0127 14:19:41.398607  608170 ssh_runner.go:195] Run: crio config
	I0127 14:19:41.444751  608170 cni.go:84] Creating CNI manager for ""
	I0127 14:19:41.444775  608170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:19:41.444784  608170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:19:41.444805  608170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.187 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-178758 NodeName:default-k8s-diff-port-178758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:19:41.444928  608170 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.187
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-178758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.187"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.187"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
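A config of this shape can be sanity-checked before it is handed to kubeadm; one way (a sketch using the binary and file paths from this run) is a dry run against the generated file, which prints what would be done without applying changes:
	sudo /var/lib/minikube/binaries/v1.32.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run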
	
	I0127 14:19:41.444985  608170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:19:41.455965  608170 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:19:41.456042  608170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:19:41.466240  608170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0127 14:19:41.482803  608170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:19:41.498677  608170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0127 14:19:41.515093  608170 ssh_runner.go:195] Run: grep 192.168.50.187	control-plane.minikube.internal$ /etc/hosts
	I0127 14:19:41.519002  608170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:19:41.533738  608170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:41.671772  608170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:19:41.688369  608170 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758 for IP: 192.168.50.187
	I0127 14:19:41.688397  608170 certs.go:194] generating shared ca certs ...
	I0127 14:19:41.688420  608170 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:41.688598  608170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:19:41.688663  608170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:19:41.688679  608170 certs.go:256] generating profile certs ...
	I0127 14:19:41.688751  608170 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.key
	I0127 14:19:41.688772  608170 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt with IP's: []
	I0127 14:19:41.798876  608170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt ...
	I0127 14:19:41.798907  608170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: {Name:mkf8a828d06b802815329c28df5f51c9bca42d68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:41.799128  608170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.key ...
	I0127 14:19:41.799155  608170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.key: {Name:mkc8b096b0fcc4d9fb51d3a8f3f8cb529dd55ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:41.799293  608170 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.key.3789323f
	I0127 14:19:41.799316  608170 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.crt.3789323f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.187]
	I0127 14:19:41.946303  608170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.crt.3789323f ...
	I0127 14:19:41.946331  608170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.crt.3789323f: {Name:mk7bfe44ec7c9bf7696acf61ad43f3f798a43a17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:41.946488  608170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.key.3789323f ...
	I0127 14:19:41.946500  608170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.key.3789323f: {Name:mkf554350198f721eb9644fe2f592fb6206d2200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:41.946568  608170 certs.go:381] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.crt.3789323f -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.crt
	I0127 14:19:41.946654  608170 certs.go:385] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.key.3789323f -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.key
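The apiserver certificate generated above embeds the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.187]; a quick way to confirm the SANs on the written file (a verification sketch, not part of the run) is:
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.crt | grep -A1 "Subject Alternative Name"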
	I0127 14:19:41.946712  608170 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/proxy-client.key
	I0127 14:19:41.946729  608170 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/proxy-client.crt with IP's: []
	I0127 14:19:42.048297  608170 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/proxy-client.crt ...
	I0127 14:19:42.048322  608170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/proxy-client.crt: {Name:mk10841125da50664e86403203af2166c654d689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:42.048458  608170 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/proxy-client.key ...
	I0127 14:19:42.048470  608170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/proxy-client.key: {Name:mk4c67802c5c3557118dddf959e1529571fd0844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:42.048632  608170 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:19:42.048665  608170 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:19:42.048679  608170 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:19:42.048706  608170 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:19:42.048735  608170 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:19:42.048756  608170 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:19:42.048793  608170 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:19:42.049469  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:19:42.075127  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:19:42.101809  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:19:42.128516  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:19:42.152595  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0127 14:19:42.179198  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 14:19:42.203598  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:19:42.227827  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 14:19:42.255858  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:19:42.279361  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:19:42.302236  608170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:19:42.325058  608170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:19:42.341414  608170 ssh_runner.go:195] Run: openssl version
	I0127 14:19:42.349119  608170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:19:42.363698  608170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:42.369543  608170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:42.369611  608170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:19:42.377347  608170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:19:42.387880  608170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:19:42.401276  608170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:19:42.407226  608170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:19:42.407288  608170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:19:42.414742  608170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:19:42.424972  608170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:19:42.435450  608170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:19:42.440052  608170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:19:42.440112  608170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:19:42.446005  608170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
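The symlink names above follow OpenSSL's subject-hash scheme: openssl x509 -hash -noout -in <cert> prints an 8-hex-digit hash (b5213941 for minikubeCA.pem in this run), and a link named <hash>.0 in /etc/ssl/certs is how OpenSSL-based clients locate the CA during verification. Condensed:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
	# /etc/ssl/certs/minikubeCA.pem is itself a symlink created just above:
	#   /etc/ssl/certs/minikubeCA.pem -> /usr/share/ca-certificates/minikubeCA.pem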
	I0127 14:19:42.460186  608170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:19:42.464752  608170 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 14:19:42.464816  608170 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-178758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-178758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.187 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:19:42.464921  608170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:19:42.465003  608170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:19:42.537434  608170 cri.go:89] found id: ""
	I0127 14:19:42.537519  608170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:19:42.551546  608170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:19:42.565725  608170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:19:42.577963  608170 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:19:42.577979  608170 kubeadm.go:157] found existing configuration files:
	
	I0127 14:19:42.578019  608170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 14:19:42.586699  608170 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:19:42.586743  608170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:19:42.595529  608170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 14:19:42.604127  608170 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:19:42.604180  608170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:19:42.613269  608170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 14:19:42.622493  608170 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:19:42.622529  608170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:19:42.631369  608170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 14:19:42.641990  608170 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:19:42.642043  608170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:19:42.651323  608170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:19:42.774731  608170 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:19:42.774968  608170 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:19:42.887694  608170 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:19:42.887885  608170 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:19:42.888031  608170 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:19:42.897478  608170 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:19:41.708450  603347 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000334136s
	I0127 14:19:41.708494  603347 kubeadm.go:310] 
	I0127 14:19:41.708554  603347 kubeadm.go:310] Unfortunately, an error has occurred:
	I0127 14:19:41.708595  603347 kubeadm.go:310] 	context deadline exceeded
	I0127 14:19:41.708605  603347 kubeadm.go:310] 
	I0127 14:19:41.708688  603347 kubeadm.go:310] This error is likely caused by:
	I0127 14:19:41.708768  603347 kubeadm.go:310] 	- The kubelet is not running
	I0127 14:19:41.708939  603347 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 14:19:41.708971  603347 kubeadm.go:310] 
	I0127 14:19:41.709129  603347 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 14:19:41.709188  603347 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0127 14:19:41.709238  603347 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0127 14:19:41.709248  603347 kubeadm.go:310] 
	I0127 14:19:41.709379  603347 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 14:19:41.709487  603347 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 14:19:41.709621  603347 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0127 14:19:41.709806  603347 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 14:19:41.709925  603347 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0127 14:19:41.710033  603347 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0127 14:19:41.711496  603347 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:19:41.711628  603347 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0127 14:19:41.711751  603347 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 14:19:41.711948  603347 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.32.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.003285792s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000334136s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
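The checks suggested by the failed kubeadm run can be driven from the host through minikube's ssh subcommand; a sketch only (this excerpt does not show the failing profile's name, so <profile> is a placeholder):

	out/minikube-linux-amd64 -p <profile> ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p <profile> ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	out/minikube-linux-amd64 -p <profile> ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"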
	
	I0127 14:19:41.711998  603347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 14:19:44.331034  603347 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.618992916s)
	I0127 14:19:44.331133  603347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:19:44.348265  603347 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:19:44.358902  603347 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:19:44.358921  603347 kubeadm.go:157] found existing configuration files:
	
	I0127 14:19:44.358968  603347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:19:44.368180  603347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:19:44.368232  603347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:19:44.377808  603347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:19:44.386968  603347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:19:44.387009  603347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:19:44.396246  603347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:19:44.404791  603347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:19:44.404833  603347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:19:44.414511  603347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:19:44.422945  603347 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:19:44.422990  603347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:19:44.431919  603347 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:19:44.485298  603347 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:19:44.485415  603347 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:19:44.600724  603347 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:19:44.600909  603347 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:19:44.601075  603347 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:19:44.608740  603347 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:19:44.659741  603347 out.go:235]   - Generating certificates and keys ...
	I0127 14:19:44.659859  603347 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:19:44.659934  603347 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:19:44.660026  603347 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 14:19:44.660099  603347 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 14:19:44.660203  603347 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 14:19:44.660268  603347 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 14:19:44.660379  603347 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 14:19:44.660455  603347 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 14:19:44.660545  603347 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 14:19:44.660633  603347 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 14:19:44.660683  603347 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 14:19:44.660753  603347 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:19:44.700726  603347 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:19:44.806826  603347 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:19:44.893687  603347 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:19:45.046966  603347 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:19:45.283768  603347 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:19:45.284446  603347 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:19:45.287225  603347 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:19:43.059616  608170 out.go:235]   - Generating certificates and keys ...
	I0127 14:19:43.059748  608170 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:19:43.059824  608170 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:19:43.084413  608170 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 14:19:43.594826  608170 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 14:19:43.705846  608170 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 14:19:43.932790  608170 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 14:19:44.039121  608170 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 14:19:44.039396  608170 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-178758 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
	I0127 14:19:44.622787  608170 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 14:19:44.623019  608170 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-178758 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
	I0127 14:19:44.831543  608170 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 14:19:45.055734  608170 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 14:19:45.503209  608170 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 14:19:45.503909  608170 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:19:45.872720  608170 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:19:45.952573  608170 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:19:46.225850  608170 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:19:46.505067  608170 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:19:46.947441  608170 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:19:46.948084  608170 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:19:46.950737  608170 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:19:45.368528  603347 out.go:235]   - Booting up control plane ...
	I0127 14:19:45.368682  603347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:19:45.368806  603347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:19:45.368930  603347 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:19:45.369154  603347 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:19:45.369288  603347 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:19:45.369355  603347 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:19:45.478334  603347 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:19:45.478503  603347 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:19:46.482298  603347 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004096525s
	I0127 14:19:46.482419  603347 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:19:46.952250  608170 out.go:235]   - Booting up control plane ...
	I0127 14:19:46.952377  608170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:19:46.952471  608170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:19:46.952574  608170 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:19:46.976087  608170 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:19:46.983571  608170 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:19:46.983659  608170 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:19:47.116031  608170 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:19:47.116199  608170 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:19:47.617136  608170 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.064622ms
	I0127 14:19:47.617259  608170 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:19:52.619566  608170 kubeadm.go:310] [api-check] The API server is healthy after 5.002071323s
	I0127 14:19:52.639854  608170 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:19:52.651354  608170 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:19:52.674477  608170 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:19:52.674700  608170 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-178758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:19:52.683417  608170 kubeadm.go:310] [bootstrap-token] Using token: gsy3dg.yoebococ6f633odf
	I0127 14:19:52.684613  608170 out.go:235]   - Configuring RBAC rules ...
	I0127 14:19:52.684730  608170 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:19:52.691475  608170 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:19:52.697296  608170 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:19:52.700308  608170 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:19:52.703327  608170 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:19:52.707394  608170 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:19:53.027032  608170 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:19:53.479581  608170 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:19:54.027672  608170 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:19:54.027720  608170 kubeadm.go:310] 
	I0127 14:19:54.027781  608170 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:19:54.027798  608170 kubeadm.go:310] 
	I0127 14:19:54.027896  608170 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:19:54.027910  608170 kubeadm.go:310] 
	I0127 14:19:54.027936  608170 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:19:54.028009  608170 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:19:54.028060  608170 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:19:54.028067  608170 kubeadm.go:310] 
	I0127 14:19:54.028113  608170 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:19:54.028121  608170 kubeadm.go:310] 
	I0127 14:19:54.028189  608170 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:19:54.028200  608170 kubeadm.go:310] 
	I0127 14:19:54.028271  608170 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:19:54.028370  608170 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:19:54.028471  608170 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:19:54.028481  608170 kubeadm.go:310] 
	I0127 14:19:54.028592  608170 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:19:54.028711  608170 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:19:54.028743  608170 kubeadm.go:310] 
	I0127 14:19:54.028865  608170 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gsy3dg.yoebococ6f633odf \
	I0127 14:19:54.029004  608170 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a60ff6161e02b5a75df4f173d820326404ac2037065d4322193a60c87e11fb02 \
	I0127 14:19:54.029038  608170 kubeadm.go:310] 	--control-plane 
	I0127 14:19:54.029048  608170 kubeadm.go:310] 
	I0127 14:19:54.029173  608170 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:19:54.029185  608170 kubeadm.go:310] 
	I0127 14:19:54.029297  608170 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gsy3dg.yoebococ6f633odf \
	I0127 14:19:54.029441  608170 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a60ff6161e02b5a75df4f173d820326404ac2037065d4322193a60c87e11fb02 
	I0127 14:19:54.030061  608170 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:19:54.030138  608170 cni.go:84] Creating CNI manager for ""
	I0127 14:19:54.030152  608170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:19:54.031502  608170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:19:54.032629  608170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:19:54.046095  608170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
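The scp above writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist inside the VM. To inspect the file that was actually written, something along these lines works (a sketch against this run's profile):

	out/minikube-linux-amd64 -p default-k8s-diff-port-178758 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"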
	I0127 14:19:54.069845  608170 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:19:54.069971  608170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:19:54.069995  608170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-178758 minikube.k8s.io/updated_at=2025_01_27T14_19_54_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=default-k8s-diff-port-178758 minikube.k8s.io/primary=true
	I0127 14:19:54.110754  608170 ops.go:34] apiserver oom_adj: -16
	I0127 14:19:54.298274  608170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:19:54.799241  608170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:19:55.298487  608170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:19:55.798497  608170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:19:56.298676  608170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:19:56.798805  608170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:19:57.298820  608170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:19:57.799324  608170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:19:58.298441  608170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:19:58.387522  608170 kubeadm.go:1113] duration metric: took 4.317602581s to wait for elevateKubeSystemPrivileges
	I0127 14:19:58.387555  608170 kubeadm.go:394] duration metric: took 15.922745282s to StartCluster
	I0127 14:19:58.387581  608170 settings.go:142] acquiring lock: {Name:mk3584d1c70a231ddef63c926d3bba51690f47f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:58.387669  608170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:19:58.389496  608170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:19:58.389784  608170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 14:19:58.389794  608170 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.187 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:19:58.389864  608170 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:19:58.389975  608170 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-178758"
	I0127 14:19:58.389995  608170 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-178758"
	I0127 14:19:58.390010  608170 config.go:182] Loaded profile config "default-k8s-diff-port-178758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:19:58.390027  608170 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-178758"
	I0127 14:19:58.390028  608170 host.go:66] Checking if "default-k8s-diff-port-178758" exists ...
	I0127 14:19:58.390083  608170 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-178758"
	I0127 14:19:58.390586  608170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:19:58.390612  608170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:19:58.390634  608170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:58.390669  608170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:58.391237  608170 out.go:177] * Verifying Kubernetes components...
	I0127 14:19:58.392793  608170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:19:58.412362  608170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I0127 14:19:58.412398  608170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I0127 14:19:58.412838  608170 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:58.412915  608170 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:58.413476  608170 main.go:141] libmachine: Using API Version  1
	I0127 14:19:58.413491  608170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:58.413699  608170 main.go:141] libmachine: Using API Version  1
	I0127 14:19:58.413737  608170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:58.413911  608170 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:58.414118  608170 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:58.414145  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetState
	I0127 14:19:58.414693  608170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:19:58.414727  608170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:58.418637  608170 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-178758"
	I0127 14:19:58.418686  608170 host.go:66] Checking if "default-k8s-diff-port-178758" exists ...
	I0127 14:19:58.419076  608170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:19:58.419111  608170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:58.430866  608170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44269
	I0127 14:19:58.431331  608170 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:58.431872  608170 main.go:141] libmachine: Using API Version  1
	I0127 14:19:58.431899  608170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:58.432367  608170 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:58.432575  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetState
	I0127 14:19:58.434514  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:19:58.434912  608170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45245
	I0127 14:19:58.435355  608170 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:58.436152  608170 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:19:58.437298  608170 main.go:141] libmachine: Using API Version  1
	I0127 14:19:58.437333  608170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:58.437712  608170 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:58.438252  608170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:19:58.438296  608170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:19:58.442256  608170 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:19:58.442280  608170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:19:58.442295  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:58.445613  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:58.446042  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:58.446070  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:58.446425  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:58.446578  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:58.446736  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:58.446854  608170 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:19:58.454637  608170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44689
	I0127 14:19:58.454982  608170 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:19:58.455435  608170 main.go:141] libmachine: Using API Version  1
	I0127 14:19:58.455453  608170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:19:58.455740  608170 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:19:58.455938  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetState
	I0127 14:19:58.457459  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .DriverName
	I0127 14:19:58.457688  608170 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:19:58.457707  608170 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:19:58.457729  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHHostname
	I0127 14:19:58.460339  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:58.460730  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:12:0f", ip: ""} in network mk-default-k8s-diff-port-178758: {Iface:virbr4 ExpiryTime:2025-01-27 15:19:25 +0000 UTC Type:0 Mac:52:54:00:9e:12:0f Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:default-k8s-diff-port-178758 Clientid:01:52:54:00:9e:12:0f}
	I0127 14:19:58.460771  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | domain default-k8s-diff-port-178758 has defined IP address 192.168.50.187 and MAC address 52:54:00:9e:12:0f in network mk-default-k8s-diff-port-178758
	I0127 14:19:58.460890  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHPort
	I0127 14:19:58.461047  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHKeyPath
	I0127 14:19:58.461214  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .GetSSHUsername
	I0127 14:19:58.461346  608170 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/default-k8s-diff-port-178758/id_rsa Username:docker}
	I0127 14:19:58.815899  608170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:19:58.845979  608170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:19:58.931012  608170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:19:58.931206  608170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 14:19:59.493488  608170 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:59.493516  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:19:59.493855  608170 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:59.493881  608170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:59.493892  608170 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:59.493902  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:19:59.494006  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Closing plugin on server side
	I0127 14:19:59.494145  608170 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:59.494161  608170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:19:59.494196  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Closing plugin on server side
	I0127 14:19:59.531053  608170 main.go:141] libmachine: Making call to close driver server
	I0127 14:19:59.531071  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:19:59.531368  608170 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:19:59.531396  608170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:00.036801  608170 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.105749436s)
	I0127 14:20:00.036878  608170 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.105631179s)
	I0127 14:20:00.036910  608170 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
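Per the sed expression in the command completed above, the CoreDNS Corefile gains a hosts block (and a log directive) ahead of its forward directive; a sketch of how to confirm it from the host, with the expected content inferred from that sed rather than captured output:

	# Expect the Corefile in the ConfigMap to contain, just above "forward . /etc/resolv.conf":
	#   hosts {
	#      192.168.50.1 host.minikube.internal
	#      fallthrough
	#   }
	out/minikube-linux-amd64 -p default-k8s-diff-port-178758 kubectl -- -n kube-system get configmap coredns -o yaml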
	I0127 14:20:00.037083  608170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.191064039s)
	I0127 14:20:00.037131  608170 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:00.037157  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:20:00.037692  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Closing plugin on server side
	I0127 14:20:00.037727  608170 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:00.037749  608170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:00.037763  608170 main.go:141] libmachine: Making call to close driver server
	I0127 14:20:00.037771  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) Calling .Close
	I0127 14:20:00.038137  608170 main.go:141] libmachine: (default-k8s-diff-port-178758) DBG | Closing plugin on server side
	I0127 14:20:00.038178  608170 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:20:00.038186  608170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:20:00.038999  608170 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-178758" to be "Ready" ...
	I0127 14:20:00.039822  608170 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0127 14:20:00.041047  608170 addons.go:514] duration metric: took 1.65118266s for enable addons: enabled=[default-storageclass storage-provisioner]
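The two enabled addons can be spot-checked with kubectl against the same cluster; a sketch, assuming the kubeconfig context carries the profile name as minikube normally writes it:

	kubectl --context default-k8s-diff-port-178758 -n kube-system get pod storage-provisioner
	kubectl --context default-k8s-diff-port-178758 get storageclass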
	I0127 14:20:00.047190  608170 node_ready.go:49] node "default-k8s-diff-port-178758" has status "Ready":"True"
	I0127 14:20:00.047212  608170 node_ready.go:38] duration metric: took 8.147239ms for node "default-k8s-diff-port-178758" to be "Ready" ...
	I0127 14:20:00.047225  608170 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:20:00.064095  608170 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cs5vk" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:00.541553  608170 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-178758" context rescaled to 1 replicas
	I0127 14:20:02.070743  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-cs5vk" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:04.571949  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-cs5vk" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:07.070428  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-cs5vk" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:09.070587  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-cs5vk" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:10.566428  608170 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-cs5vk" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-cs5vk" not found
	I0127 14:20:10.566459  608170 pod_ready.go:82] duration metric: took 10.502339443s for pod "coredns-668d6bf9bc-cs5vk" in "kube-system" namespace to be "Ready" ...
	E0127 14:20:10.566474  608170 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-cs5vk" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-cs5vk" not found
	I0127 14:20:10.566484  608170 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:12.573838  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:14.574186  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:19.292572  604817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 14:20:19.292661  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:20:19.292824  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:20:16.574360  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:18.574390  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:20.576206  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:24.293094  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:20:24.293416  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:20:23.072392  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:25.073549  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:27.574007  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:30.072448  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:34.294153  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:20:34.294367  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:20:32.075313  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:34.573228  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:37.072770  608170 pod_ready.go:103] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"False"
	I0127 14:20:37.573524  608170 pod_ready.go:93] pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.573555  608170 pod_ready.go:82] duration metric: took 27.007062445s for pod "coredns-668d6bf9bc-nxbp7" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.573570  608170 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.577782  608170 pod_ready.go:93] pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.577802  608170 pod_ready.go:82] duration metric: took 4.224911ms for pod "etcd-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.577810  608170 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.581968  608170 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-178758" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.581985  608170 pod_ready.go:82] duration metric: took 4.168349ms for pod "kube-apiserver-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.581997  608170 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.586609  608170 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-178758" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.586626  608170 pod_ready.go:82] duration metric: took 4.622798ms for pod "kube-controller-manager-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.586635  608170 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h9dzd" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.591301  608170 pod_ready.go:93] pod "kube-proxy-h9dzd" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.591316  608170 pod_ready.go:82] duration metric: took 4.67612ms for pod "kube-proxy-h9dzd" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.591324  608170 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.970780  608170 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-178758" in "kube-system" namespace has status "Ready":"True"
	I0127 14:20:37.970809  608170 pod_ready.go:82] duration metric: took 379.477498ms for pod "kube-scheduler-default-k8s-diff-port-178758" in "kube-system" namespace to be "Ready" ...
	I0127 14:20:37.970824  608170 pod_ready.go:39] duration metric: took 37.923585497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
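The readiness loop above polls each system-critical pod through minikube's Go helpers; an equivalent manual check uses kubectl wait with the same labels listed earlier in this log (sketch only):

	kubectl --context default-k8s-diff-port-178758 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	kubectl --context default-k8s-diff-port-178758 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m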
	I0127 14:20:37.970846  608170 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:20:37.970909  608170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:20:37.990489  608170 api_server.go:72] duration metric: took 39.600651965s to wait for apiserver process to appear ...
	I0127 14:20:37.990518  608170 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:20:37.990543  608170 api_server.go:253] Checking apiserver healthz at https://192.168.50.187:8444/healthz ...
	I0127 14:20:37.996246  608170 api_server.go:279] https://192.168.50.187:8444/healthz returned 200:
	ok
	I0127 14:20:37.997206  608170 api_server.go:141] control plane version: v1.32.1
	I0127 14:20:37.997230  608170 api_server.go:131] duration metric: took 6.703673ms to wait for apiserver health ...
	I0127 14:20:37.997240  608170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:20:38.173876  608170 system_pods.go:59] 7 kube-system pods found
	I0127 14:20:38.173907  608170 system_pods.go:61] "coredns-668d6bf9bc-nxbp7" [1598d49e-31bd-4040-9517-342c41bdbfbb] Running
	I0127 14:20:38.173913  608170 system_pods.go:61] "etcd-default-k8s-diff-port-178758" [677f51de-78d4-4fcf-a379-aa2cdeae5c94] Running
	I0127 14:20:38.173917  608170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-178758" [485b927e-cb6d-44f2-a8a3-99e9b04eb683] Running
	I0127 14:20:38.173920  608170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-178758" [a1c04b1b-a73f-4278-ba51-cf849f495fad] Running
	I0127 14:20:38.173924  608170 system_pods.go:61] "kube-proxy-h9dzd" [6014094a-3b42-457c-a06b-9432d1029225] Running
	I0127 14:20:38.173927  608170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-178758" [7b3dbf93-f770-444b-a0b7-2fd807faef6c] Running
	I0127 14:20:38.173930  608170 system_pods.go:61] "storage-provisioner" [e4090d6b-233e-4053-a355-3ad858d5b9b4] Running
	I0127 14:20:38.173936  608170 system_pods.go:74] duration metric: took 176.689184ms to wait for pod list to return data ...
	I0127 14:20:38.173944  608170 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:20:38.371376  608170 default_sa.go:45] found service account: "default"
	I0127 14:20:38.371410  608170 default_sa.go:55] duration metric: took 197.458921ms for default service account to be created ...
	I0127 14:20:38.371420  608170 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:20:38.573861  608170 system_pods.go:87] 7 kube-system pods found
	I0127 14:20:38.771446  608170 system_pods.go:105] "coredns-668d6bf9bc-nxbp7" [1598d49e-31bd-4040-9517-342c41bdbfbb] Running
	I0127 14:20:38.771472  608170 system_pods.go:105] "etcd-default-k8s-diff-port-178758" [677f51de-78d4-4fcf-a379-aa2cdeae5c94] Running
	I0127 14:20:38.771477  608170 system_pods.go:105] "kube-apiserver-default-k8s-diff-port-178758" [485b927e-cb6d-44f2-a8a3-99e9b04eb683] Running
	I0127 14:20:38.771483  608170 system_pods.go:105] "kube-controller-manager-default-k8s-diff-port-178758" [a1c04b1b-a73f-4278-ba51-cf849f495fad] Running
	I0127 14:20:38.771490  608170 system_pods.go:105] "kube-proxy-h9dzd" [6014094a-3b42-457c-a06b-9432d1029225] Running
	I0127 14:20:38.771497  608170 system_pods.go:105] "kube-scheduler-default-k8s-diff-port-178758" [7b3dbf93-f770-444b-a0b7-2fd807faef6c] Running
	I0127 14:20:38.771505  608170 system_pods.go:105] "storage-provisioner" [e4090d6b-233e-4053-a355-3ad858d5b9b4] Running
	I0127 14:20:38.771516  608170 system_pods.go:147] duration metric: took 400.087514ms to wait for k8s-apps to be running ...
	I0127 14:20:38.771526  608170 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 14:20:38.771590  608170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:20:38.788546  608170 system_svc.go:56] duration metric: took 17.010204ms WaitForService to wait for kubelet
	I0127 14:20:38.788573  608170 kubeadm.go:582] duration metric: took 40.398742674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:20:38.788594  608170 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:20:38.971424  608170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:20:38.971454  608170 node_conditions.go:123] node cpu capacity is 2
	I0127 14:20:38.971470  608170 node_conditions.go:105] duration metric: took 182.871105ms to run NodePressure ...
	I0127 14:20:38.971483  608170 start.go:241] waiting for startup goroutines ...
	I0127 14:20:38.971490  608170 start.go:246] waiting for cluster config update ...
	I0127 14:20:38.971502  608170 start.go:255] writing updated cluster config ...
	I0127 14:20:38.971774  608170 ssh_runner.go:195] Run: rm -f paused
	I0127 14:20:39.024446  608170 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:20:39.026405  608170 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-178758" cluster and "default" namespace by default
	I0127 14:20:54.295205  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:20:54.295461  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:21:34.296827  604817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 14:21:34.297079  604817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 14:21:34.297109  604817 kubeadm.go:310] 
	I0127 14:21:34.297169  604817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 14:21:34.297220  604817 kubeadm.go:310] 		timed out waiting for the condition
	I0127 14:21:34.297231  604817 kubeadm.go:310] 
	I0127 14:21:34.297278  604817 kubeadm.go:310] 	This error is likely caused by:
	I0127 14:21:34.297325  604817 kubeadm.go:310] 		- The kubelet is not running
	I0127 14:21:34.297447  604817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 14:21:34.297468  604817 kubeadm.go:310] 
	I0127 14:21:34.297632  604817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 14:21:34.297677  604817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 14:21:34.297717  604817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 14:21:34.297728  604817 kubeadm.go:310] 
	I0127 14:21:34.297894  604817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 14:21:34.298028  604817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 14:21:34.298043  604817 kubeadm.go:310] 
	I0127 14:21:34.298170  604817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 14:21:34.298253  604817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 14:21:34.298316  604817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 14:21:34.298397  604817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 14:21:34.298408  604817 kubeadm.go:310] 
	I0127 14:21:34.299114  604817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:21:34.299216  604817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 14:21:34.299291  604817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 14:21:34.299356  604817 kubeadm.go:394] duration metric: took 7m57.571006925s to StartCluster
	I0127 14:21:34.299406  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 14:21:34.299474  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 14:21:34.355761  604817 cri.go:89] found id: ""
	I0127 14:21:34.355787  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.355798  604817 logs.go:284] No container was found matching "kube-apiserver"
	I0127 14:21:34.355807  604817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 14:21:34.355871  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 14:21:34.395943  604817 cri.go:89] found id: ""
	I0127 14:21:34.395967  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.395977  604817 logs.go:284] No container was found matching "etcd"
	I0127 14:21:34.395985  604817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 14:21:34.396045  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 14:21:34.435060  604817 cri.go:89] found id: ""
	I0127 14:21:34.435078  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.435098  604817 logs.go:284] No container was found matching "coredns"
	I0127 14:21:34.435117  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 14:21:34.435190  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 14:21:34.471426  604817 cri.go:89] found id: ""
	I0127 14:21:34.471450  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.471461  604817 logs.go:284] No container was found matching "kube-scheduler"
	I0127 14:21:34.471469  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 14:21:34.471528  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 14:21:34.505950  604817 cri.go:89] found id: ""
	I0127 14:21:34.505976  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.505984  604817 logs.go:284] No container was found matching "kube-proxy"
	I0127 14:21:34.505990  604817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 14:21:34.506043  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 14:21:34.539754  604817 cri.go:89] found id: ""
	I0127 14:21:34.539776  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.539784  604817 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 14:21:34.539789  604817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 14:21:34.539841  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 14:21:34.571093  604817 cri.go:89] found id: ""
	I0127 14:21:34.571120  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.571134  604817 logs.go:284] No container was found matching "kindnet"
	I0127 14:21:34.571139  604817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 14:21:34.571186  604817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 14:21:34.608370  604817 cri.go:89] found id: ""
	I0127 14:21:34.608395  604817 logs.go:282] 0 containers: []
	W0127 14:21:34.608404  604817 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 14:21:34.608427  604817 logs.go:123] Gathering logs for kubelet ...
	I0127 14:21:34.608442  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 14:21:34.662214  604817 logs.go:123] Gathering logs for dmesg ...
	I0127 14:21:34.662239  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 14:21:34.675535  604817 logs.go:123] Gathering logs for describe nodes ...
	I0127 14:21:34.675559  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 14:21:34.750391  604817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 14:21:34.750415  604817 logs.go:123] Gathering logs for CRI-O ...
	I0127 14:21:34.750429  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 14:21:34.851544  604817 logs.go:123] Gathering logs for container status ...
	I0127 14:21:34.851575  604817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0127 14:21:34.919115  604817 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 14:21:34.919173  604817 out.go:270] * 
	W0127 14:21:34.919254  604817 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 14:21:34.919275  604817 out.go:270] * 
	W0127 14:21:34.920116  604817 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 14:21:34.923401  604817 out.go:201] 
	W0127 14:21:34.924638  604817 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 14:21:34.924682  604817 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 14:21:34.924709  604817 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 14:21:34.926036  604817 out.go:201] 
	
	
	==> CRI-O <==
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.886475508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987695886407656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7228729-a3c5-4b67-a7df-65fcce9cb156 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.886942861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7a067f0-0508-4224-9815-76d507c4675b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.887016712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7a067f0-0508-4224-9815-76d507c4675b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.887052848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a7a067f0-0508-4224-9815-76d507c4675b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.920526277Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fa23642-297c-4791-a68f-15e4a80a62be name=/runtime.v1.RuntimeService/Version
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.920609178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fa23642-297c-4791-a68f-15e4a80a62be name=/runtime.v1.RuntimeService/Version
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.922157661Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b0ce301-88d1-4cd5-ab1a-53085ba2aff4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.922581399Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987695922558919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b0ce301-88d1-4cd5-ab1a-53085ba2aff4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.923006311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76ab693d-8b60-4b0f-997f-8cbb7a41ff6d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.923078033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76ab693d-8b60-4b0f-997f-8cbb7a41ff6d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.923145876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=76ab693d-8b60-4b0f-997f-8cbb7a41ff6d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.955153338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d6feb37-8a36-4863-8dca-90e25461caff name=/runtime.v1.RuntimeService/Version
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.955208865Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d6feb37-8a36-4863-8dca-90e25461caff name=/runtime.v1.RuntimeService/Version
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.956123523Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77a1c73b-d5d0-4b74-8f3d-28ab2f1a65ff name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.956510919Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987695956486569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77a1c73b-d5d0-4b74-8f3d-28ab2f1a65ff name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.957032925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6792b4b-5fd9-4e14-ac56-3303713afb4b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.957076217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6792b4b-5fd9-4e14-ac56-3303713afb4b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.957104284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e6792b4b-5fd9-4e14-ac56-3303713afb4b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.987569590Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f258aba2-32dc-45b1-a108-decfab959c09 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.987626934Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f258aba2-32dc-45b1-a108-decfab959c09 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.988537257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1ead4a5-09db-48b8-b47b-9e5a2eaf12bc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.988857537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987695988842441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1ead4a5-09db-48b8-b47b-9e5a2eaf12bc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.989327793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea409440-a930-465a-8991-4ecd52ed5eb1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.989369211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea409440-a930-465a-8991-4ecd52ed5eb1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:21:35 old-k8s-version-456130 crio[624]: time="2025-01-27 14:21:35.989396540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ea409440-a930-465a-8991-4ecd52ed5eb1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 14:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051913] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040598] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.061734] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.852778] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.633968] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.949773] systemd-fstab-generator[550]: Ignoring "noauto" option for root device
	[  +0.054597] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055817] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.196544] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.123926] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.248912] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +6.669224] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.071494] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.203893] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[ +13.783962] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 14:17] systemd-fstab-generator[5083]: Ignoring "noauto" option for root device
	[Jan27 14:19] systemd-fstab-generator[5364]: Ignoring "noauto" option for root device
	[  +0.080632] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:21:36 up 8 min,  0 users,  load average: 0.06, 0.15, 0.08
	Linux old-k8s-version-456130 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc00090df80, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]: net.cgoIPLookup(0xc000170f00, 0x48ab5d6, 0x3, 0xc00090df80, 0x1f)
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]: created by net.cgoLookupIP
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]: goroutine 123 [select]:
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000174d20, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00053e780, 0x0, 0x0)
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0001e2e00)
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 27 14:21:34 old-k8s-version-456130 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 14:21:34 old-k8s-version-456130 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 14:21:34 old-k8s-version-456130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jan 27 14:21:34 old-k8s-version-456130 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 14:21:34 old-k8s-version-456130 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5604]: I0127 14:21:34.873570    5604 server.go:416] Version: v1.20.0
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5604]: I0127 14:21:34.873869    5604 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5604]: I0127 14:21:34.875723    5604 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5604]: W0127 14:21:34.876671    5604 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 14:21:34 old-k8s-version-456130 kubelet[5604]: I0127 14:21:34.876975    5604 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456130 -n old-k8s-version-456130
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 2 (221.700242ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-456130" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (506.25s)
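The kubeadm output captured above points at a kubelet that never became healthy on the old-k8s-version node, so no control-plane containers were ever listed. A minimal sketch of the checks that output recommends, using the profile name from this run (old-k8s-version-456130) and assuming the minikube VM is still up and reachable over SSH:

	# open a shell on the node for this profile
	minikube ssh -p old-k8s-version-456130
	# kubelet service state and recent log entries
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet -n 200
	# list any control-plane containers the runtime managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the kubelet is crash-looping on a cgroup-driver mismatch, the suggestion logged above can be retried directly with the flag minikube itself proposes:

	minikube start -p old-k8s-version-456130 --extra-config=kubelet.cgroup-driver=systemd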

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
(previous warning repeated 15 more times)
E0127 14:22:11.198512  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
(previous warning repeated 77 more times)
E0127 14:23:28.672771  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
(previous warning repeated 57 more times)
E0127 14:24:27.335218  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
(previous warning repeated 24 more times)
E0127 14:24:51.741467  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:55.040225  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:25:34.435055  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:28:28.672659  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
[previous warning repeated 57 more times with identical output]
E0127 14:29:27.335284  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
[previous warning repeated 16 more times with identical output]
E0127 14:29:44.414165  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:44.420929  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:44.432616  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:44.454653  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:29:44.496708  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:44.578466  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:44.740544  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:45.062396  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:29:45.704512  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:29:46.986402  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:29:49.548666  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
[previous warning repeated 4 more times with identical output]
E0127 14:29:54.670632  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
[previous warning repeated 9 more times with identical output]
E0127 14:30:04.912485  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:30:25.393905  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:30:34.434175  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456130 -n old-k8s-version-456130
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 2 (238.220707ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-456130" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
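The failure above means no pod matching k8s-app=kubernetes-dashboard became ready before the 9m0s deadline after the stop/start cycle. A rough hand-run equivalent of that wait (illustrative only; it assumes the kubeconfig context carries the same name as the profile) would be:

    # wait up to 9 minutes for a ready dashboard pod, mirroring the test's selector and timeout
    kubectl --context old-k8s-version-456130 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

Given that the {{.APIServer}} probe just above reported Stopped, such a wait would not succeed either.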
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 2 (229.542233ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-456130 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372 sudo cat                | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372 sudo cat                | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372 sudo cat                | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-418372                         | enable-default-cni-418372 | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC | 27 Jan 25 14:29 UTC |
	| start   | -p bridge-418372 --memory=3072                       | bridge-418372             | jenkins | v1.35.0 | 27 Jan 25 14:29 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:29:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:29:58.428259  619737 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:29:58.428355  619737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:29:58.428363  619737 out.go:358] Setting ErrFile to fd 2...
	I0127 14:29:58.428369  619737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:29:58.428556  619737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:29:58.429178  619737 out.go:352] Setting JSON to false
	I0127 14:29:58.430355  619737 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":18743,"bootTime":1737969455,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:29:58.430472  619737 start.go:139] virtualization: kvm guest
	I0127 14:29:58.432328  619737 out.go:177] * [bridge-418372] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:29:58.433847  619737 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:29:58.433841  619737 notify.go:220] Checking for updates...
	I0127 14:29:58.435064  619737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:29:58.436272  619737 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:29:58.437495  619737 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:29:58.438658  619737 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:29:54.794135  618007 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 14:29:54.800129  618007 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 14:29:54.800149  618007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0127 14:29:54.827977  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 14:29:55.354721  618007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:29:55.354799  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:55.354815  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-418372 minikube.k8s.io/updated_at=2025_01_27T14_29_55_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=flannel-418372 minikube.k8s.io/primary=true
	I0127 14:29:55.498477  618007 ops.go:34] apiserver oom_adj: -16
	I0127 14:29:55.498561  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:55.998885  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:56.499532  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:56.998893  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:57.499229  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:57.999484  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:58.440406  619737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:29:58.442063  619737 config.go:182] Loaded profile config "embed-certs-742142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:29:58.442183  619737 config.go:182] Loaded profile config "flannel-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:29:58.442310  619737 config.go:182] Loaded profile config "old-k8s-version-456130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:29:58.442439  619737 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:29:58.481913  619737 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:29:58.482984  619737 start.go:297] selected driver: kvm2
	I0127 14:29:58.482999  619737 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:29:58.483014  619737 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:29:58.483732  619737 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:29:58.483833  619737 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:29:58.500677  619737 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:29:58.500725  619737 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:29:58.501048  619737 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:29:58.501095  619737 cni.go:84] Creating CNI manager for "bridge"
	I0127 14:29:58.501112  619737 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:29:58.501223  619737 start.go:340] cluster config:
	{Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:29:58.501374  619737 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:29:58.502978  619737 out.go:177] * Starting "bridge-418372" primary control-plane node in "bridge-418372" cluster
	I0127 14:29:58.504138  619737 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:29:58.504185  619737 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:29:58.504199  619737 cache.go:56] Caching tarball of preloaded images
	I0127 14:29:58.504311  619737 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:29:58.504327  619737 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:29:58.504450  619737 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/config.json ...
	I0127 14:29:58.504481  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/config.json: {Name:mk097cf8466e36fa95d1648a8e56c4a0cdde1a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:29:58.504659  619737 start.go:360] acquireMachinesLock for bridge-418372: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:29:58.504713  619737 start.go:364] duration metric: took 30.62µs to acquireMachinesLock for "bridge-418372"
	I0127 14:29:58.504739  619737 start.go:93] Provisioning new machine with config: &{Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:29:58.504825  619737 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 14:29:58.499356  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:58.598508  618007 kubeadm.go:1113] duration metric: took 3.243774581s to wait for elevateKubeSystemPrivileges
	I0127 14:29:58.598548  618007 kubeadm.go:394] duration metric: took 14.302797004s to StartCluster
	I0127 14:29:58.598576  618007 settings.go:142] acquiring lock: {Name:mk3584d1c70a231ddef63c926d3bba51690f47f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:29:58.598660  618007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:29:58.600178  618007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:29:58.600419  618007 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.236 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:29:58.600467  618007 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:29:58.600563  618007 addons.go:69] Setting storage-provisioner=true in profile "flannel-418372"
	I0127 14:29:58.600580  618007 addons.go:238] Setting addon storage-provisioner=true in "flannel-418372"
	I0127 14:29:58.600619  618007 host.go:66] Checking if "flannel-418372" exists ...
	I0127 14:29:58.600452  618007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 14:29:58.600644  618007 config.go:182] Loaded profile config "flannel-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:29:58.600634  618007 addons.go:69] Setting default-storageclass=true in profile "flannel-418372"
	I0127 14:29:58.600706  618007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-418372"
	I0127 14:29:58.601115  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.601158  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.601205  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.601251  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.602065  618007 out.go:177] * Verifying Kubernetes components...
	I0127 14:29:58.603305  618007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:29:58.619130  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0127 14:29:58.619384  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0127 14:29:58.619700  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.619900  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.620429  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.620455  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.620610  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.620627  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.620955  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.621103  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.621621  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.621657  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.622065  618007 main.go:141] libmachine: (flannel-418372) Calling .GetState
	I0127 14:29:58.625921  618007 addons.go:238] Setting addon default-storageclass=true in "flannel-418372"
	I0127 14:29:58.625960  618007 host.go:66] Checking if "flannel-418372" exists ...
	I0127 14:29:58.626287  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.626338  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.642239  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0127 14:29:58.642768  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.643416  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.643445  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.643901  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.644142  618007 main.go:141] libmachine: (flannel-418372) Calling .GetState
	I0127 14:29:58.646191  618007 main.go:141] libmachine: (flannel-418372) Calling .DriverName
	I0127 14:29:58.648095  618007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:29:58.648367  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0127 14:29:58.648707  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.649404  618007 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:29:58.649430  618007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:29:58.649463  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHHostname
	I0127 14:29:58.649503  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.649531  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.650223  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.650842  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.650889  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.652688  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.653147  618007 main.go:141] libmachine: (flannel-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:3b:a4", ip: ""} in network mk-flannel-418372: {Iface:virbr4 ExpiryTime:2025-01-27 15:29:29 +0000 UTC Type:0 Mac:52:54:00:b3:3b:a4 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:flannel-418372 Clientid:01:52:54:00:b3:3b:a4}
	I0127 14:29:58.653172  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined IP address 192.168.50.236 and MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.653365  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHPort
	I0127 14:29:58.653518  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHKeyPath
	I0127 14:29:58.653764  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHUsername
	I0127 14:29:58.653963  618007 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/flannel-418372/id_rsa Username:docker}
	I0127 14:29:58.666548  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0127 14:29:58.666868  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.667294  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.667314  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.667561  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.667762  618007 main.go:141] libmachine: (flannel-418372) Calling .GetState
	I0127 14:29:58.669489  618007 main.go:141] libmachine: (flannel-418372) Calling .DriverName
	I0127 14:29:58.669741  618007 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:29:58.669755  618007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:29:58.669767  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHHostname
	I0127 14:29:58.673157  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.673667  618007 main.go:141] libmachine: (flannel-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:3b:a4", ip: ""} in network mk-flannel-418372: {Iface:virbr4 ExpiryTime:2025-01-27 15:29:29 +0000 UTC Type:0 Mac:52:54:00:b3:3b:a4 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:flannel-418372 Clientid:01:52:54:00:b3:3b:a4}
	I0127 14:29:58.673740  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined IP address 192.168.50.236 and MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.673866  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHPort
	I0127 14:29:58.674035  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHKeyPath
	I0127 14:29:58.674189  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHUsername
	I0127 14:29:58.674352  618007 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/flannel-418372/id_rsa Username:docker}
	I0127 14:29:58.812282  618007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 14:29:58.843820  618007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:29:59.006382  618007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:29:59.076837  618007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:29:59.439964  618007 node_ready.go:35] waiting up to 15m0s for node "flannel-418372" to be "Ready" ...
	I0127 14:29:59.440353  618007 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0127 14:29:59.897933  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.897955  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.897964  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.897979  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.898296  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.898314  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.898325  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.898333  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.898451  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.898464  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.898472  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.898480  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.898484  618007 main.go:141] libmachine: (flannel-418372) DBG | Closing plugin on server side
	I0127 14:29:59.900207  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.900218  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.900268  618007 main.go:141] libmachine: (flannel-418372) DBG | Closing plugin on server side
	I0127 14:29:59.900273  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.900304  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.911467  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.911486  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.911738  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.911762  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.911766  618007 main.go:141] libmachine: (flannel-418372) DBG | Closing plugin on server side
	I0127 14:29:59.913044  618007 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 14:29:58.506345  619737 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 14:29:58.506539  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.506600  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.521777  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I0127 14:29:58.522212  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.522764  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.522793  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.523225  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.523506  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:29:58.523719  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:29:58.523905  619737 start.go:159] libmachine.API.Create for "bridge-418372" (driver="kvm2")
	I0127 14:29:58.523931  619737 client.go:168] LocalClient.Create starting
	I0127 14:29:58.523959  619737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem
	I0127 14:29:58.523990  619737 main.go:141] libmachine: Decoding PEM data...
	I0127 14:29:58.524006  619737 main.go:141] libmachine: Parsing certificate...
	I0127 14:29:58.524070  619737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem
	I0127 14:29:58.524089  619737 main.go:141] libmachine: Decoding PEM data...
	I0127 14:29:58.524100  619737 main.go:141] libmachine: Parsing certificate...
	I0127 14:29:58.524128  619737 main.go:141] libmachine: Running pre-create checks...
	I0127 14:29:58.524137  619737 main.go:141] libmachine: (bridge-418372) Calling .PreCreateCheck
	I0127 14:29:58.524515  619737 main.go:141] libmachine: (bridge-418372) Calling .GetConfigRaw
	I0127 14:29:58.525026  619737 main.go:141] libmachine: Creating machine...
	I0127 14:29:58.525043  619737 main.go:141] libmachine: (bridge-418372) Calling .Create
	I0127 14:29:58.525197  619737 main.go:141] libmachine: (bridge-418372) creating KVM machine...
	I0127 14:29:58.525214  619737 main.go:141] libmachine: (bridge-418372) creating network...
	I0127 14:29:58.526633  619737 main.go:141] libmachine: (bridge-418372) DBG | found existing default KVM network
	I0127 14:29:58.528058  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.527875  619760 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1d:6c:da} reservation:<nil>}
	I0127 14:29:58.529143  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.529064  619760 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:9f:16} reservation:<nil>}
	I0127 14:29:58.530053  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.529980  619760 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:9b:c5} reservation:<nil>}
	I0127 14:29:58.531138  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.531066  619760 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027fa90}
	I0127 14:29:58.531168  619737 main.go:141] libmachine: (bridge-418372) DBG | created network xml: 
	I0127 14:29:58.531176  619737 main.go:141] libmachine: (bridge-418372) DBG | <network>
	I0127 14:29:58.531181  619737 main.go:141] libmachine: (bridge-418372) DBG |   <name>mk-bridge-418372</name>
	I0127 14:29:58.531190  619737 main.go:141] libmachine: (bridge-418372) DBG |   <dns enable='no'/>
	I0127 14:29:58.531197  619737 main.go:141] libmachine: (bridge-418372) DBG |   
	I0127 14:29:58.531211  619737 main.go:141] libmachine: (bridge-418372) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0127 14:29:58.531225  619737 main.go:141] libmachine: (bridge-418372) DBG |     <dhcp>
	I0127 14:29:58.531254  619737 main.go:141] libmachine: (bridge-418372) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0127 14:29:58.531276  619737 main.go:141] libmachine: (bridge-418372) DBG |     </dhcp>
	I0127 14:29:58.531285  619737 main.go:141] libmachine: (bridge-418372) DBG |   </ip>
	I0127 14:29:58.531292  619737 main.go:141] libmachine: (bridge-418372) DBG |   
	I0127 14:29:58.531300  619737 main.go:141] libmachine: (bridge-418372) DBG | </network>
	I0127 14:29:58.531309  619737 main.go:141] libmachine: (bridge-418372) DBG | 
	I0127 14:29:58.536042  619737 main.go:141] libmachine: (bridge-418372) DBG | trying to create private KVM network mk-bridge-418372 192.168.72.0/24...
	I0127 14:29:58.619397  619737 main.go:141] libmachine: (bridge-418372) DBG | private KVM network mk-bridge-418372 192.168.72.0/24 created
	I0127 14:29:58.619417  619737 main.go:141] libmachine: (bridge-418372) setting up store path in /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372 ...
	I0127 14:29:58.619428  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.619379  619760 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:29:58.619443  619737 main.go:141] libmachine: (bridge-418372) building disk image from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:29:58.619522  619737 main.go:141] libmachine: (bridge-418372) Downloading /home/jenkins/minikube-integration/20327-555419/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 14:29:58.924369  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.924221  619760 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa...
	I0127 14:29:59.184940  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.184795  619760 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/bridge-418372.rawdisk...
	I0127 14:29:59.184993  619737 main.go:141] libmachine: (bridge-418372) DBG | Writing magic tar header
	I0127 14:29:59.185009  619737 main.go:141] libmachine: (bridge-418372) DBG | Writing SSH key tar header
	I0127 14:29:59.185032  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.184949  619760 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372 ...
	I0127 14:29:59.185152  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372
	I0127 14:29:59.185180  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines
	I0127 14:29:59.185194  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372 (perms=drwx------)
	I0127 14:29:59.185214  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines (perms=drwxr-xr-x)
	I0127 14:29:59.185231  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube (perms=drwxr-xr-x)
	I0127 14:29:59.185244  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419 (perms=drwxrwxr-x)
	I0127 14:29:59.185253  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 14:29:59.185264  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:29:59.185276  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 14:29:59.185287  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419
	I0127 14:29:59.185305  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 14:29:59.185319  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins
	I0127 14:29:59.185328  619737 main.go:141] libmachine: (bridge-418372) creating domain...
	I0127 14:29:59.185342  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home
	I0127 14:29:59.185355  619737 main.go:141] libmachine: (bridge-418372) DBG | skipping /home - not owner
	I0127 14:29:59.186522  619737 main.go:141] libmachine: (bridge-418372) define libvirt domain using xml: 
	I0127 14:29:59.186545  619737 main.go:141] libmachine: (bridge-418372) <domain type='kvm'>
	I0127 14:29:59.186554  619737 main.go:141] libmachine: (bridge-418372)   <name>bridge-418372</name>
	I0127 14:29:59.186567  619737 main.go:141] libmachine: (bridge-418372)   <memory unit='MiB'>3072</memory>
	I0127 14:29:59.186606  619737 main.go:141] libmachine: (bridge-418372)   <vcpu>2</vcpu>
	I0127 14:29:59.186644  619737 main.go:141] libmachine: (bridge-418372)   <features>
	I0127 14:29:59.186658  619737 main.go:141] libmachine: (bridge-418372)     <acpi/>
	I0127 14:29:59.186668  619737 main.go:141] libmachine: (bridge-418372)     <apic/>
	I0127 14:29:59.186687  619737 main.go:141] libmachine: (bridge-418372)     <pae/>
	I0127 14:29:59.186697  619737 main.go:141] libmachine: (bridge-418372)     
	I0127 14:29:59.186713  619737 main.go:141] libmachine: (bridge-418372)   </features>
	I0127 14:29:59.186724  619737 main.go:141] libmachine: (bridge-418372)   <cpu mode='host-passthrough'>
	I0127 14:29:59.186732  619737 main.go:141] libmachine: (bridge-418372)   
	I0127 14:29:59.186741  619737 main.go:141] libmachine: (bridge-418372)   </cpu>
	I0127 14:29:59.186749  619737 main.go:141] libmachine: (bridge-418372)   <os>
	I0127 14:29:59.186759  619737 main.go:141] libmachine: (bridge-418372)     <type>hvm</type>
	I0127 14:29:59.186771  619737 main.go:141] libmachine: (bridge-418372)     <boot dev='cdrom'/>
	I0127 14:29:59.186781  619737 main.go:141] libmachine: (bridge-418372)     <boot dev='hd'/>
	I0127 14:29:59.186791  619737 main.go:141] libmachine: (bridge-418372)     <bootmenu enable='no'/>
	I0127 14:29:59.186799  619737 main.go:141] libmachine: (bridge-418372)   </os>
	I0127 14:29:59.186807  619737 main.go:141] libmachine: (bridge-418372)   <devices>
	I0127 14:29:59.186816  619737 main.go:141] libmachine: (bridge-418372)     <disk type='file' device='cdrom'>
	I0127 14:29:59.186837  619737 main.go:141] libmachine: (bridge-418372)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/boot2docker.iso'/>
	I0127 14:29:59.186851  619737 main.go:141] libmachine: (bridge-418372)       <target dev='hdc' bus='scsi'/>
	I0127 14:29:59.186860  619737 main.go:141] libmachine: (bridge-418372)       <readonly/>
	I0127 14:29:59.186869  619737 main.go:141] libmachine: (bridge-418372)     </disk>
	I0127 14:29:59.186884  619737 main.go:141] libmachine: (bridge-418372)     <disk type='file' device='disk'>
	I0127 14:29:59.186896  619737 main.go:141] libmachine: (bridge-418372)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 14:29:59.186909  619737 main.go:141] libmachine: (bridge-418372)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/bridge-418372.rawdisk'/>
	I0127 14:29:59.186919  619737 main.go:141] libmachine: (bridge-418372)       <target dev='hda' bus='virtio'/>
	I0127 14:29:59.186925  619737 main.go:141] libmachine: (bridge-418372)     </disk>
	I0127 14:29:59.186931  619737 main.go:141] libmachine: (bridge-418372)     <interface type='network'>
	I0127 14:29:59.186939  619737 main.go:141] libmachine: (bridge-418372)       <source network='mk-bridge-418372'/>
	I0127 14:29:59.186945  619737 main.go:141] libmachine: (bridge-418372)       <model type='virtio'/>
	I0127 14:29:59.186968  619737 main.go:141] libmachine: (bridge-418372)     </interface>
	I0127 14:29:59.186980  619737 main.go:141] libmachine: (bridge-418372)     <interface type='network'>
	I0127 14:29:59.186989  619737 main.go:141] libmachine: (bridge-418372)       <source network='default'/>
	I0127 14:29:59.186999  619737 main.go:141] libmachine: (bridge-418372)       <model type='virtio'/>
	I0127 14:29:59.187007  619737 main.go:141] libmachine: (bridge-418372)     </interface>
	I0127 14:29:59.187016  619737 main.go:141] libmachine: (bridge-418372)     <serial type='pty'>
	I0127 14:29:59.187024  619737 main.go:141] libmachine: (bridge-418372)       <target port='0'/>
	I0127 14:29:59.187042  619737 main.go:141] libmachine: (bridge-418372)     </serial>
	I0127 14:29:59.187053  619737 main.go:141] libmachine: (bridge-418372)     <console type='pty'>
	I0127 14:29:59.187060  619737 main.go:141] libmachine: (bridge-418372)       <target type='serial' port='0'/>
	I0127 14:29:59.187070  619737 main.go:141] libmachine: (bridge-418372)     </console>
	I0127 14:29:59.187075  619737 main.go:141] libmachine: (bridge-418372)     <rng model='virtio'>
	I0127 14:29:59.187088  619737 main.go:141] libmachine: (bridge-418372)       <backend model='random'>/dev/random</backend>
	I0127 14:29:59.187099  619737 main.go:141] libmachine: (bridge-418372)     </rng>
	I0127 14:29:59.187109  619737 main.go:141] libmachine: (bridge-418372)     
	I0127 14:29:59.187115  619737 main.go:141] libmachine: (bridge-418372)     
	I0127 14:29:59.187127  619737 main.go:141] libmachine: (bridge-418372)   </devices>
	I0127 14:29:59.187133  619737 main.go:141] libmachine: (bridge-418372) </domain>
	I0127 14:29:59.187147  619737 main.go:141] libmachine: (bridge-418372) 
	I0127 14:29:59.192870  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:dc:94:4c in network default
	I0127 14:29:59.193459  619737 main.go:141] libmachine: (bridge-418372) starting domain...
	I0127 14:29:59.193498  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:29:59.193514  619737 main.go:141] libmachine: (bridge-418372) ensuring networks are active...
	I0127 14:29:59.194186  619737 main.go:141] libmachine: (bridge-418372) Ensuring network default is active
	I0127 14:29:59.194531  619737 main.go:141] libmachine: (bridge-418372) Ensuring network mk-bridge-418372 is active
	I0127 14:29:59.195173  619737 main.go:141] libmachine: (bridge-418372) getting domain XML...
	I0127 14:29:59.196009  619737 main.go:141] libmachine: (bridge-418372) creating domain...
	I0127 14:29:59.603422  619737 main.go:141] libmachine: (bridge-418372) waiting for IP...
	I0127 14:29:59.604334  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:29:59.604867  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:29:59.604937  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.604872  619760 retry.go:31] will retry after 303.965936ms: waiting for domain to come up
	I0127 14:29:59.910634  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:29:59.911365  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:29:59.911395  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.911327  619760 retry.go:31] will retry after 241.006912ms: waiting for domain to come up
	I0127 14:30:00.153815  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:00.154372  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:00.154403  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:00.154354  619760 retry.go:31] will retry after 323.516048ms: waiting for domain to come up
	I0127 14:30:00.479917  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:00.480471  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:00.480490  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:00.480451  619760 retry.go:31] will retry after 577.842165ms: waiting for domain to come up
	I0127 14:30:01.059664  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:01.060181  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:01.060209  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:01.060153  619760 retry.go:31] will retry after 693.227243ms: waiting for domain to come up
	I0127 14:30:01.754699  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:01.755198  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:01.755231  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:01.755167  619760 retry.go:31] will retry after 601.644547ms: waiting for domain to come up
	I0127 14:30:02.358857  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:02.359425  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:02.359456  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:02.359398  619760 retry.go:31] will retry after 805.211831ms: waiting for domain to come up
	I0127 14:30:03.166329  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:03.166920  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:03.166954  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:03.166895  619760 retry.go:31] will retry after 1.344095834s: waiting for domain to come up
	I0127 14:29:59.914025  618007 addons.go:514] duration metric: took 1.313551088s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 14:29:59.948236  618007 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-418372" context rescaled to 1 replicas
	I0127 14:30:01.444005  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:04.513305  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:04.513804  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:04.513825  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:04.513785  619760 retry.go:31] will retry after 1.439144315s: waiting for domain to come up
	I0127 14:30:05.954624  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:05.955150  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:05.955180  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:05.955114  619760 retry.go:31] will retry after 1.897876702s: waiting for domain to come up
	I0127 14:30:07.854669  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:07.855304  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:07.855364  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:07.855289  619760 retry.go:31] will retry after 1.982634575s: waiting for domain to come up
	I0127 14:30:03.943205  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:05.944150  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:09.839318  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:09.839985  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:09.840015  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:09.839942  619760 retry.go:31] will retry after 3.383361388s: waiting for domain to come up
	I0127 14:30:13.226586  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:13.227082  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:13.227161  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:13.227058  619760 retry.go:31] will retry after 3.076957623s: waiting for domain to come up
	I0127 14:30:08.444021  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:10.944599  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:16.306620  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:16.307278  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:16.307306  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:16.307257  619760 retry.go:31] will retry after 5.232439528s: waiting for domain to come up
	I0127 14:30:13.443330  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:15.943802  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:21.543562  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.544125  619737 main.go:141] libmachine: (bridge-418372) found domain IP: 192.168.72.158
	I0127 14:30:21.544159  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has current primary IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.544168  619737 main.go:141] libmachine: (bridge-418372) reserving static IP address...
	I0127 14:30:21.544584  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find host DHCP lease matching {name: "bridge-418372", mac: "52:54:00:34:a5:5b", ip: "192.168.72.158"} in network mk-bridge-418372
	I0127 14:30:21.620096  619737 main.go:141] libmachine: (bridge-418372) DBG | Getting to WaitForSSH function...
	I0127 14:30:21.620142  619737 main.go:141] libmachine: (bridge-418372) reserved static IP address 192.168.72.158 for domain bridge-418372
	I0127 14:30:21.620156  619737 main.go:141] libmachine: (bridge-418372) waiting for SSH...
	I0127 14:30:21.623062  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.623569  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.623601  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.623801  619737 main.go:141] libmachine: (bridge-418372) DBG | Using SSH client type: external
	I0127 14:30:21.623826  619737 main.go:141] libmachine: (bridge-418372) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa (-rw-------)
	I0127 14:30:21.623865  619737 main.go:141] libmachine: (bridge-418372) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:30:21.623880  619737 main.go:141] libmachine: (bridge-418372) DBG | About to run SSH command:
	I0127 14:30:21.623915  619737 main.go:141] libmachine: (bridge-418372) DBG | exit 0
	I0127 14:30:21.749658  619737 main.go:141] libmachine: (bridge-418372) DBG | SSH cmd err, output: <nil>: 
	I0127 14:30:21.749918  619737 main.go:141] libmachine: (bridge-418372) KVM machine creation complete
	I0127 14:30:21.750400  619737 main.go:141] libmachine: (bridge-418372) Calling .GetConfigRaw
	I0127 14:30:21.750961  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:21.751196  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:21.751406  619737 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 14:30:21.751421  619737 main.go:141] libmachine: (bridge-418372) Calling .GetState
	I0127 14:30:21.752834  619737 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 14:30:21.752851  619737 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 14:30:21.752859  619737 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 14:30:21.752883  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:21.755459  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.755886  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.755913  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.756091  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:21.756297  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.756467  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.756642  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:21.756809  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:21.757010  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:21.757020  619737 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 14:30:21.856846  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:30:21.856875  619737 main.go:141] libmachine: Detecting the provisioner...
	I0127 14:30:21.856885  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:21.859711  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.860096  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.860133  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.860331  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:21.860555  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.860723  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.860912  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:21.861103  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:21.861357  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:21.861375  619737 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 14:30:21.966551  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 14:30:21.966638  619737 main.go:141] libmachine: found compatible host: buildroot
	I0127 14:30:21.966653  619737 main.go:141] libmachine: Provisioning with buildroot...
	I0127 14:30:21.966663  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:30:21.966929  619737 buildroot.go:166] provisioning hostname "bridge-418372"
	I0127 14:30:21.966993  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:30:21.967184  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:21.969863  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.970301  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.970330  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.970473  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:21.970662  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.970806  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.970980  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:21.971184  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:21.971397  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:21.971411  619737 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-418372 && echo "bridge-418372" | sudo tee /etc/hostname
	I0127 14:30:22.088428  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-418372
	
	I0127 14:30:22.088472  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.091063  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.091586  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.091611  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.091821  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.092004  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.092139  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.092303  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.092514  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:22.092705  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:22.092732  619737 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-418372' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-418372/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-418372' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:30:22.206493  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:30:22.206523  619737 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:30:22.206555  619737 buildroot.go:174] setting up certificates
	I0127 14:30:22.206570  619737 provision.go:84] configureAuth start
	I0127 14:30:22.206580  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:30:22.206870  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:22.209586  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.209920  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.209959  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.210081  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.212164  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.212510  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.212527  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.212711  619737 provision.go:143] copyHostCerts
	I0127 14:30:22.212761  619737 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:30:22.212785  619737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:30:22.212874  619737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:30:22.213016  619737 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:30:22.213027  619737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:30:22.213064  619737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:30:22.213138  619737 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:30:22.213146  619737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:30:22.213168  619737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:30:22.213230  619737 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.bridge-418372 san=[127.0.0.1 192.168.72.158 bridge-418372 localhost minikube]
	I0127 14:30:22.548623  619737 provision.go:177] copyRemoteCerts
	I0127 14:30:22.548680  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:30:22.548706  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.551241  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.551575  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.551604  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.551796  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.552020  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.552246  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.552395  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:22.643890  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:30:22.670713  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 14:30:22.693627  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:30:22.717638  619737 provision.go:87] duration metric: took 511.05611ms to configureAuth
	I0127 14:30:22.717668  619737 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:30:22.717835  619737 config.go:182] Loaded profile config "bridge-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:30:22.717935  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.720466  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.720835  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.720865  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.721045  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.721238  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.721385  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.721514  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.721646  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:22.721822  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:22.721844  619737 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:30:22.938113  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:30:22.938145  619737 main.go:141] libmachine: Checking connection to Docker...
	I0127 14:30:22.938155  619737 main.go:141] libmachine: (bridge-418372) Calling .GetURL
	I0127 14:30:22.939593  619737 main.go:141] libmachine: (bridge-418372) DBG | using libvirt version 6000000
	I0127 14:30:22.942205  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.942565  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.942607  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.942749  619737 main.go:141] libmachine: Docker is up and running!
	I0127 14:30:22.942779  619737 main.go:141] libmachine: Reticulating splines...
	I0127 14:30:22.942791  619737 client.go:171] duration metric: took 24.418851853s to LocalClient.Create
	I0127 14:30:22.942815  619737 start.go:167] duration metric: took 24.418910733s to libmachine.API.Create "bridge-418372"
	I0127 14:30:22.942825  619737 start.go:293] postStartSetup for "bridge-418372" (driver="kvm2")
	I0127 14:30:22.942834  619737 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:30:22.942854  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:22.943081  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:30:22.943104  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.945274  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.945649  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.945678  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.945844  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.946014  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.946145  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.946279  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:23.027435  619737 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:30:23.031408  619737 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:30:23.031432  619737 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:30:23.031490  619737 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:30:23.031589  619737 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:30:23.031684  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:30:23.041098  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:30:23.064771  619737 start.go:296] duration metric: took 121.935009ms for postStartSetup
	I0127 14:30:23.064822  619737 main.go:141] libmachine: (bridge-418372) Calling .GetConfigRaw
	I0127 14:30:23.065340  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:23.068126  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.068566  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.068585  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.068850  619737 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/config.json ...
	I0127 14:30:23.069082  619737 start.go:128] duration metric: took 24.564244155s to createHost
	I0127 14:30:23.069112  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:23.071565  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.071930  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.071958  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.072093  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:23.072294  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.072485  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.072602  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:23.072779  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:23.072928  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:23.072937  619737 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:30:23.173863  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737988223.150041878
	
	I0127 14:30:23.173884  619737 fix.go:216] guest clock: 1737988223.150041878
	I0127 14:30:23.173890  619737 fix.go:229] Guest: 2025-01-27 14:30:23.150041878 +0000 UTC Remote: 2025-01-27 14:30:23.069097778 +0000 UTC m=+24.679552593 (delta=80.9441ms)
	I0127 14:30:23.173936  619737 fix.go:200] guest clock delta is within tolerance: 80.9441ms
	I0127 14:30:23.173948  619737 start.go:83] releasing machines lock for "bridge-418372", held for 24.669221959s
	I0127 14:30:23.173973  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.174207  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:23.176840  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.177209  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.177240  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.177413  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.177905  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.178089  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.178172  619737 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:30:23.178218  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:23.178318  619737 ssh_runner.go:195] Run: cat /version.json
	I0127 14:30:23.178350  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:23.181082  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181120  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181443  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.181470  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181496  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.181513  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181567  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:23.181734  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.181816  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:23.181907  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:23.181974  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.182052  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:23.182110  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:23.182209  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:23.254783  619737 ssh_runner.go:195] Run: systemctl --version
	I0127 14:30:23.277936  619737 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:30:18.443736  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:20.942676  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:21.452564  618007 node_ready.go:49] node "flannel-418372" has status "Ready":"True"
	I0127 14:30:21.452591  618007 node_ready.go:38] duration metric: took 22.012579891s for node "flannel-418372" to be "Ready" ...
	I0127 14:30:21.452602  618007 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:30:21.461767  618007 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:23.436466  619737 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:30:23.443141  619737 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:30:23.443197  619737 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:30:23.460545  619737 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:30:23.460567  619737 start.go:495] detecting cgroup driver to use...
	I0127 14:30:23.460628  619737 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:30:23.479133  619737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:30:23.494546  619737 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:30:23.494614  619737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:30:23.508408  619737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:30:23.521348  619737 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:30:23.635456  619737 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:30:23.765321  619737 docker.go:233] disabling docker service ...
	I0127 14:30:23.765393  619737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:30:23.778859  619737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:30:23.790920  619737 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:30:23.924634  619737 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:30:24.053414  619737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:30:24.066957  619737 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:30:24.085971  619737 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:30:24.086040  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.096202  619737 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:30:24.096256  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.106388  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.116650  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.127369  619737 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:30:24.137556  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.147564  619737 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.166019  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.176231  619737 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:30:24.185246  619737 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:30:24.185296  619737 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:30:24.198571  619737 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:30:24.207701  619737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:30:24.326803  619737 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:30:24.416087  619737 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:30:24.416166  619737 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:30:24.421135  619737 start.go:563] Will wait 60s for crictl version
	I0127 14:30:24.421191  619737 ssh_runner.go:195] Run: which crictl
	I0127 14:30:24.425096  619737 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:30:24.467553  619737 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:30:24.467656  619737 ssh_runner.go:195] Run: crio --version
	I0127 14:30:24.494858  619737 ssh_runner.go:195] Run: crio --version
	I0127 14:30:24.523951  619737 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:30:24.525015  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:24.527690  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:24.528062  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:24.528102  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:24.528378  619737 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 14:30:24.532290  619737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:30:24.545520  619737 kubeadm.go:883] updating cluster {Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:30:24.545653  619737 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:30:24.545722  619737 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:30:24.578117  619737 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 14:30:24.578183  619737 ssh_runner.go:195] Run: which lz4
	I0127 14:30:24.581940  619737 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:30:24.585899  619737 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:30:24.585926  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 14:30:26.046393  619737 crio.go:462] duration metric: took 1.464480043s to copy over tarball
	I0127 14:30:26.046476  619737 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:30:28.286060  619737 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.239526518s)
	I0127 14:30:28.286090  619737 crio.go:469] duration metric: took 2.239666444s to extract the tarball
	I0127 14:30:28.286098  619737 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 14:30:28.329925  619737 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:30:28.372463  619737 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:30:28.372493  619737 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:30:28.372506  619737 kubeadm.go:934] updating node { 192.168.72.158 8443 v1.32.1 crio true true} ...
	I0127 14:30:28.372639  619737 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-418372 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 14:30:28.372730  619737 ssh_runner.go:195] Run: crio config
	I0127 14:30:23.469182  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:25.470378  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:27.969278  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:28.431389  619737 cni.go:84] Creating CNI manager for "bridge"
	I0127 14:30:28.431419  619737 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:30:28.431445  619737 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-418372 NodeName:bridge-418372 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:30:28.431596  619737 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-418372"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.158"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:30:28.431664  619737 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:30:28.443712  619737 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:30:28.443775  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:30:28.453106  619737 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0127 14:30:28.472323  619737 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:30:28.488568  619737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0127 14:30:28.505501  619737 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0127 14:30:28.509628  619737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:30:28.522026  619737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:30:28.644859  619737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:30:28.660903  619737 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372 for IP: 192.168.72.158
	I0127 14:30:28.660924  619737 certs.go:194] generating shared ca certs ...
	I0127 14:30:28.660945  619737 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:28.661145  619737 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:30:28.661204  619737 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:30:28.661218  619737 certs.go:256] generating profile certs ...
	I0127 14:30:28.661295  619737 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.key
	I0127 14:30:28.661316  619737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt with IP's: []
	I0127 14:30:28.906551  619737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt ...
	I0127 14:30:28.906578  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: {Name:mk1e2537950485aa8b4f79c1832edd87a69fac76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:28.906770  619737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.key ...
	I0127 14:30:28.906787  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.key: {Name:mkefc91979c182951e8440280201021e6feaf0b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:28.906903  619737 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b
	I0127 14:30:28.906926  619737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.158]
	I0127 14:30:29.091201  619737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b ...
	I0127 14:30:29.091235  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b: {Name:mkd8eb8b7ce81ecb1ea18b8612606f856d364bd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.091400  619737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b ...
	I0127 14:30:29.091415  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b: {Name:mk69a1ca35d981f975238e5836687217bd190f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.091489  619737 certs.go:381] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt
	I0127 14:30:29.091560  619737 certs.go:385] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key
	I0127 14:30:29.091639  619737 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key
	I0127 14:30:29.091657  619737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt with IP's: []
	I0127 14:30:29.149860  619737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt ...
	I0127 14:30:29.149879  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt: {Name:mk7035d438a8cb1c492fb958853882394afbe27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.149993  619737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key ...
	I0127 14:30:29.150004  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key: {Name:mka8c6fd9acdaec459c9ef3e4dfbb4b5c5547317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.150161  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:30:29.150202  619737 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:30:29.150212  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:30:29.150232  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:30:29.150253  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:30:29.150272  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:30:29.150313  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:30:29.150944  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:30:29.175883  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:30:29.199205  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:30:29.222754  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:30:29.245909  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 14:30:29.269824  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 14:30:29.292470  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:30:29.315043  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:30:29.354655  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:30:29.383756  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:30:29.416181  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:30:29.439715  619737 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:30:29.456721  619737 ssh_runner.go:195] Run: openssl version
	I0127 14:30:29.464239  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:30:29.475723  619737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:30:29.480470  619737 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:30:29.480515  619737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:30:29.486322  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:30:29.496846  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:30:29.507085  619737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:30:29.511703  619737 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:30:29.511754  619737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:30:29.517449  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:30:29.527666  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:30:29.540074  619737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:30:29.544916  619737 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:30:29.544955  619737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:30:29.551000  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 14:30:29.562167  619737 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:30:29.566616  619737 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 14:30:29.566681  619737 kubeadm.go:392] StartCluster: {Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:30:29.566758  619737 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:30:29.566808  619737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:30:29.609003  619737 cri.go:89] found id: ""
	I0127 14:30:29.609076  619737 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:30:29.618951  619737 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:30:29.628562  619737 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:30:29.637724  619737 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:30:29.637742  619737 kubeadm.go:157] found existing configuration files:
	
	I0127 14:30:29.637782  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:30:29.648947  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:30:29.648987  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:30:29.657991  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:30:29.666526  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:30:29.666559  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:30:29.676483  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:30:29.685024  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:30:29.685073  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:30:29.693937  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:30:29.702972  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:30:29.703020  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:30:29.712304  619737 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:30:29.774803  619737 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:30:29.774988  619737 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:30:29.875816  619737 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:30:29.875979  619737 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:30:29.876114  619737 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:30:29.888173  619737 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:30:29.945220  619737 out.go:235]   - Generating certificates and keys ...
	I0127 14:30:29.945359  619737 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:30:29.945448  619737 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:30:30.158542  619737 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 14:30:30.651792  619737 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 14:30:30.728655  619737 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 14:30:30.849544  619737 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 14:30:31.081949  619737 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 14:30:31.082098  619737 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-418372 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	I0127 14:30:31.339755  619737 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 14:30:31.339980  619737 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-418372 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	I0127 14:30:31.556885  619737 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 14:30:31.958984  619737 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 14:30:32.398271  619737 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 14:30:32.398452  619737 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:30:32.525025  619737 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:30:32.699085  619737 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:30:33.067374  619737 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:30:33.229761  619737 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:30:30.074789  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:32.468447  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:33.740325  619737 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:30:33.741768  619737 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:30:33.745759  619737 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.348919928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988237348881511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5a8c467-d5c6-4e46-a826-cbd4b6bd27cf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.349793882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33591036-41b9-43c3-9972-119482ab4569 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.349869626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33591036-41b9-43c3-9972-119482ab4569 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.349918958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=33591036-41b9-43c3-9972-119482ab4569 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.390661731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd068968-5d94-4a5a-ba08-2338cfee9549 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.390772272Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd068968-5d94-4a5a-ba08-2338cfee9549 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.392020853Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=138ec1a8-1c4e-4911-922c-53bc783420da name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.392717724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988237392681629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=138ec1a8-1c4e-4911-922c-53bc783420da name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.393586302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2f05bdb-a60a-49e8-869f-c01f038f179e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.393748335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2f05bdb-a60a-49e8-869f-c01f038f179e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.393866982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b2f05bdb-a60a-49e8-869f-c01f038f179e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.433255216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a5381e0-51d7-47a1-93e1-1c124d443b26 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.433366297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a5381e0-51d7-47a1-93e1-1c124d443b26 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.434777478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4cf2a9b-dd51-40a1-b7c2-9a5f405200a3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.435311952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988237435274435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4cf2a9b-dd51-40a1-b7c2-9a5f405200a3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.436062524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83a45208-c81d-4cea-b3a5-26e5164f871f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.436134650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83a45208-c81d-4cea-b3a5-26e5164f871f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.436182727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=83a45208-c81d-4cea-b3a5-26e5164f871f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.479134284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8719ec66-07aa-47c8-bc91-548f7fb3f9b8 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.479241230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8719ec66-07aa-47c8-bc91-548f7fb3f9b8 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.480200252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a6cbb12-a9e6-4531-b40b-f0e2fff76d2c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.480751446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988237480720187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a6cbb12-a9e6-4531-b40b-f0e2fff76d2c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.481406681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86dff0d3-951c-41af-9baf-235d0f24fff6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.481506926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86dff0d3-951c-41af-9baf-235d0f24fff6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:30:37 old-k8s-version-456130 crio[624]: time="2025-01-27 14:30:37.481550684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=86dff0d3-951c-41af-9baf-235d0f24fff6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 14:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051913] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040598] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.061734] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.852778] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.633968] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.949773] systemd-fstab-generator[550]: Ignoring "noauto" option for root device
	[  +0.054597] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055817] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.196544] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.123926] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.248912] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +6.669224] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.071494] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.203893] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[ +13.783962] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 14:17] systemd-fstab-generator[5083]: Ignoring "noauto" option for root device
	[Jan27 14:19] systemd-fstab-generator[5364]: Ignoring "noauto" option for root device
	[  +0.080632] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:30:37 up 17 min,  0 users,  load average: 0.05, 0.05, 0.05
	Linux old-k8s-version-456130 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]: created by net/http.(*Transport).queueForDial
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]: goroutine 137 [runnable]:
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008bd180)
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]: goroutine 138 [select]:
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000e22000, 0xc000dd1801, 0xc000d89c00, 0xc000dd7be0, 0xc000d9d200, 0xc000d9d1c0)
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000dd18c0, 0x0, 0x0)
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008bd180)
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 27 14:30:32 old-k8s-version-456130 kubelet[6535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 27 14:30:33 old-k8s-version-456130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 113.
	Jan 27 14:30:33 old-k8s-version-456130 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 14:30:33 old-k8s-version-456130 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 14:30:33 old-k8s-version-456130 kubelet[6545]: I0127 14:30:33.642053    6545 server.go:416] Version: v1.20.0
	Jan 27 14:30:33 old-k8s-version-456130 kubelet[6545]: I0127 14:30:33.642315    6545 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 14:30:33 old-k8s-version-456130 kubelet[6545]: I0127 14:30:33.645198    6545 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 14:30:33 old-k8s-version-456130 kubelet[6545]: W0127 14:30:33.646579    6545 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 14:30:33 old-k8s-version-456130 kubelet[6545]: I0127 14:30:33.647027    6545 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456130 -n old-k8s-version-456130
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 2 (240.974994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-456130" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (391.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:30:39.190691  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:39.197113  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:39.208544  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:39.229973  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:39.271424  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:39.352842  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:39.514572  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:39.836345  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:30:40.478118  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:30:41.759589  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:30:44.321530  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:30:49.442867  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:31:06.355835  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:31:13.001338  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:31:13.007698  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:31:13.019039  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:31:13.041052  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:31:13.082437  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:31:13.164645  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:31:13.325962  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:31:13.647601  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:31:14.289201  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:31:15.570803  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:31:18.132697  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:31:20.166562  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:31:23.254107  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:31:33.496131  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:31:53.977881  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:32:01.128809  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:32:28.278056  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:32:34.939952  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:00.960331  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:00.966732  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:00.978110  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:00.999496  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:01.041515  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:01.122903  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:01.284401  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:01.605796  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:02.247431  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:03.529190  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:06.090513  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:11.211879  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:21.453754  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:23.050795  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:28.673198  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:37.513460  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:41.935900  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:43.284078  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:43.290453  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:43.301772  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:43.323104  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:43.364481  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:43.445887  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:43.607343  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:33:43.929022  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:44.570940  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:45.853198  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:48.415571  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:53.536824  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:33:56.862090  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:34:03.778967  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:34:22.898235  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:34:24.260463  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:34:27.334997  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:34:32.666963  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:34:32.673305  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:34:32.684595  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:34:32.705876  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:34:32.747236  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:34:32.828579  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:34:32.990128  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:34:33.311661  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:34:33.953713  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:34:35.235921  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:34:37.797634  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:34:42.919943  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:34:44.413439  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:34:53.161944  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:05.222769  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:12.119630  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/auto-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:13.643688  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:34.434930  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:37.448394  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:37.454787  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:37.466107  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:37.487433  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:37.528744  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:37.610111  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:37.771583  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:38.093134  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:38.735316  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:39.190801  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:40.016976  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:42.579121  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:44.819938  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/calico-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:47.700715  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:50.402300  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:54.605238  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/enable-default-cni-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:55.346152  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:55.352494  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:55.363833  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:55.385132  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:55.426438  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:55.507793  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:55.669282  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:55.990961  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:56.632737  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:35:57.914133  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:57.942549  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:36:00.475478  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:36:05.597475  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:36:06.892926  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:36:13.001785  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:36:15.839017  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:36:18.424158  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:36:27.144108  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/custom-flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:36:36.320342  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:36:40.703697  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/kindnet-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
E0127 14:36:59.386002  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/flannel-418372/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.11:8443: connect: connection refused
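Each warning above is one iteration of the helper's pod-list poll against an apiserver that is no longer listening; the request it keeps retrying is, in effect, the following (illustrative only, built from the endpoint and label selector printed in the warnings and the profile name in the summary below):

	kubectl --context old-k8s-version-456130 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# raw form of the same request as shown in the warning text; -k only skips TLS verification, the dial itself is refused
	curl -k "https://192.168.39.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"

Both fail the same way for as long as nothing answers on 192.168.39.11:8443.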
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456130 -n old-k8s-version-456130
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 2 (229.691131ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-456130" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-456130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-456130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.016µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-456130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
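Once the profile's apiserver is reachable again, the two failed checks above can be reproduced by hand; a minimal sketch using the context, namespace, and expected image string taken from the messages above (manual commands, not part of the test run):

	# is a Ready dashboard pod present?
	kubectl --context old-k8s-version-456130 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# which image did the dashboard-metrics-scraper deployment actually roll out?
	kubectl --context old-k8s-version-456130 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the assertion at start_stop_delete_test.go:295 expects that image string to contain "registry.k8s.io/echoserver:1.4"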
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 2 (216.363738ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-456130 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-418372 sudo iptables                       | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo docker                         | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo cat                            | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo                                | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo find                           | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-418372 sudo crio                           | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-418372                                     | bridge-418372 | jenkins | v1.35.0 | 27 Jan 25 14:31 UTC | 27 Jan 25 14:31 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:29:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:29:58.428259  619737 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:29:58.428355  619737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:29:58.428363  619737 out.go:358] Setting ErrFile to fd 2...
	I0127 14:29:58.428369  619737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:29:58.428556  619737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:29:58.429178  619737 out.go:352] Setting JSON to false
	I0127 14:29:58.430355  619737 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":18743,"bootTime":1737969455,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:29:58.430472  619737 start.go:139] virtualization: kvm guest
	I0127 14:29:58.432328  619737 out.go:177] * [bridge-418372] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:29:58.433847  619737 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:29:58.433841  619737 notify.go:220] Checking for updates...
	I0127 14:29:58.435064  619737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:29:58.436272  619737 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:29:58.437495  619737 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:29:58.438658  619737 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:29:54.794135  618007 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 14:29:54.800129  618007 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 14:29:54.800149  618007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0127 14:29:54.827977  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 14:29:55.354721  618007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:29:55.354799  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:55.354815  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-418372 minikube.k8s.io/updated_at=2025_01_27T14_29_55_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=flannel-418372 minikube.k8s.io/primary=true
	I0127 14:29:55.498477  618007 ops.go:34] apiserver oom_adj: -16
	I0127 14:29:55.498561  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:55.998885  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:56.499532  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:56.998893  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:57.499229  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:57.999484  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:58.440406  619737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:29:58.442063  619737 config.go:182] Loaded profile config "embed-certs-742142": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:29:58.442183  619737 config.go:182] Loaded profile config "flannel-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:29:58.442310  619737 config.go:182] Loaded profile config "old-k8s-version-456130": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:29:58.442439  619737 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:29:58.481913  619737 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:29:58.482984  619737 start.go:297] selected driver: kvm2
	I0127 14:29:58.482999  619737 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:29:58.483014  619737 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:29:58.483732  619737 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:29:58.483833  619737 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:29:58.500677  619737 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:29:58.500725  619737 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:29:58.501048  619737 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:29:58.501095  619737 cni.go:84] Creating CNI manager for "bridge"
	I0127 14:29:58.501112  619737 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:29:58.501223  619737 start.go:340] cluster config:
	{Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:29:58.501374  619737 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:29:58.502978  619737 out.go:177] * Starting "bridge-418372" primary control-plane node in "bridge-418372" cluster
	I0127 14:29:58.504138  619737 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:29:58.504185  619737 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:29:58.504199  619737 cache.go:56] Caching tarball of preloaded images
	I0127 14:29:58.504311  619737 preload.go:172] Found /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:29:58.504327  619737 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:29:58.504450  619737 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/config.json ...
	I0127 14:29:58.504481  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/config.json: {Name:mk097cf8466e36fa95d1648a8e56c4a0cdde1a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:29:58.504659  619737 start.go:360] acquireMachinesLock for bridge-418372: {Name:mk6d38fa09fa24cd3c414dc7ae5daeed893565a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:29:58.504713  619737 start.go:364] duration metric: took 30.62µs to acquireMachinesLock for "bridge-418372"
	I0127 14:29:58.504739  619737 start.go:93] Provisioning new machine with config: &{Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:29:58.504825  619737 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 14:29:58.499356  618007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:29:58.598508  618007 kubeadm.go:1113] duration metric: took 3.243774581s to wait for elevateKubeSystemPrivileges
	I0127 14:29:58.598548  618007 kubeadm.go:394] duration metric: took 14.302797004s to StartCluster
	I0127 14:29:58.598576  618007 settings.go:142] acquiring lock: {Name:mk3584d1c70a231ddef63c926d3bba51690f47f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:29:58.598660  618007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:29:58.600178  618007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:29:58.600419  618007 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.236 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:29:58.600467  618007 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:29:58.600563  618007 addons.go:69] Setting storage-provisioner=true in profile "flannel-418372"
	I0127 14:29:58.600580  618007 addons.go:238] Setting addon storage-provisioner=true in "flannel-418372"
	I0127 14:29:58.600619  618007 host.go:66] Checking if "flannel-418372" exists ...
	I0127 14:29:58.600452  618007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 14:29:58.600644  618007 config.go:182] Loaded profile config "flannel-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:29:58.600634  618007 addons.go:69] Setting default-storageclass=true in profile "flannel-418372"
	I0127 14:29:58.600706  618007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-418372"
	I0127 14:29:58.601115  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.601158  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.601205  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.601251  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.602065  618007 out.go:177] * Verifying Kubernetes components...
	I0127 14:29:58.603305  618007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:29:58.619130  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0127 14:29:58.619384  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0127 14:29:58.619700  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.619900  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.620429  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.620455  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.620610  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.620627  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.620955  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.621103  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.621621  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.621657  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.622065  618007 main.go:141] libmachine: (flannel-418372) Calling .GetState
	I0127 14:29:58.625921  618007 addons.go:238] Setting addon default-storageclass=true in "flannel-418372"
	I0127 14:29:58.625960  618007 host.go:66] Checking if "flannel-418372" exists ...
	I0127 14:29:58.626287  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.626338  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.642239  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0127 14:29:58.642768  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.643416  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.643445  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.643901  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.644142  618007 main.go:141] libmachine: (flannel-418372) Calling .GetState
	I0127 14:29:58.646191  618007 main.go:141] libmachine: (flannel-418372) Calling .DriverName
	I0127 14:29:58.648095  618007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:29:58.648367  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0127 14:29:58.648707  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.649404  618007 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:29:58.649430  618007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:29:58.649463  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHHostname
	I0127 14:29:58.649503  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.649531  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.650223  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.650842  618007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.650889  618007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.652688  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.653147  618007 main.go:141] libmachine: (flannel-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:3b:a4", ip: ""} in network mk-flannel-418372: {Iface:virbr4 ExpiryTime:2025-01-27 15:29:29 +0000 UTC Type:0 Mac:52:54:00:b3:3b:a4 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:flannel-418372 Clientid:01:52:54:00:b3:3b:a4}
	I0127 14:29:58.653172  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined IP address 192.168.50.236 and MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.653365  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHPort
	I0127 14:29:58.653518  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHKeyPath
	I0127 14:29:58.653764  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHUsername
	I0127 14:29:58.653963  618007 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/flannel-418372/id_rsa Username:docker}
	I0127 14:29:58.666548  618007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0127 14:29:58.666868  618007 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.667294  618007 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.667314  618007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.667561  618007 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.667762  618007 main.go:141] libmachine: (flannel-418372) Calling .GetState
	I0127 14:29:58.669489  618007 main.go:141] libmachine: (flannel-418372) Calling .DriverName
	I0127 14:29:58.669741  618007 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:29:58.669755  618007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:29:58.669767  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHHostname
	I0127 14:29:58.673157  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.673667  618007 main.go:141] libmachine: (flannel-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:3b:a4", ip: ""} in network mk-flannel-418372: {Iface:virbr4 ExpiryTime:2025-01-27 15:29:29 +0000 UTC Type:0 Mac:52:54:00:b3:3b:a4 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:flannel-418372 Clientid:01:52:54:00:b3:3b:a4}
	I0127 14:29:58.673740  618007 main.go:141] libmachine: (flannel-418372) DBG | domain flannel-418372 has defined IP address 192.168.50.236 and MAC address 52:54:00:b3:3b:a4 in network mk-flannel-418372
	I0127 14:29:58.673866  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHPort
	I0127 14:29:58.674035  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHKeyPath
	I0127 14:29:58.674189  618007 main.go:141] libmachine: (flannel-418372) Calling .GetSSHUsername
	I0127 14:29:58.674352  618007 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/flannel-418372/id_rsa Username:docker}
	I0127 14:29:58.812282  618007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 14:29:58.843820  618007 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:29:59.006382  618007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:29:59.076837  618007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:29:59.439964  618007 node_ready.go:35] waiting up to 15m0s for node "flannel-418372" to be "Ready" ...
	I0127 14:29:59.440353  618007 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0127 14:29:59.897933  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.897955  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.897964  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.897979  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.898296  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.898314  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.898325  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.898333  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.898451  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.898464  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.898472  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.898480  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.898484  618007 main.go:141] libmachine: (flannel-418372) DBG | Closing plugin on server side
	I0127 14:29:59.900207  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.900218  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.900268  618007 main.go:141] libmachine: (flannel-418372) DBG | Closing plugin on server side
	I0127 14:29:59.900273  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.900304  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.911467  618007 main.go:141] libmachine: Making call to close driver server
	I0127 14:29:59.911486  618007 main.go:141] libmachine: (flannel-418372) Calling .Close
	I0127 14:29:59.911738  618007 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:29:59.911762  618007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:29:59.911766  618007 main.go:141] libmachine: (flannel-418372) DBG | Closing plugin on server side
	I0127 14:29:59.913044  618007 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 14:29:58.506345  619737 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 14:29:58.506539  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:29:58.506600  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:29:58.521777  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I0127 14:29:58.522212  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:29:58.522764  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:29:58.522793  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:29:58.523225  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:29:58.523506  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:29:58.523719  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:29:58.523905  619737 start.go:159] libmachine.API.Create for "bridge-418372" (driver="kvm2")
	I0127 14:29:58.523931  619737 client.go:168] LocalClient.Create starting
	I0127 14:29:58.523959  619737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem
	I0127 14:29:58.523990  619737 main.go:141] libmachine: Decoding PEM data...
	I0127 14:29:58.524006  619737 main.go:141] libmachine: Parsing certificate...
	I0127 14:29:58.524070  619737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem
	I0127 14:29:58.524089  619737 main.go:141] libmachine: Decoding PEM data...
	I0127 14:29:58.524100  619737 main.go:141] libmachine: Parsing certificate...
	I0127 14:29:58.524128  619737 main.go:141] libmachine: Running pre-create checks...
	I0127 14:29:58.524137  619737 main.go:141] libmachine: (bridge-418372) Calling .PreCreateCheck
	I0127 14:29:58.524515  619737 main.go:141] libmachine: (bridge-418372) Calling .GetConfigRaw
	I0127 14:29:58.525026  619737 main.go:141] libmachine: Creating machine...
	I0127 14:29:58.525043  619737 main.go:141] libmachine: (bridge-418372) Calling .Create
	I0127 14:29:58.525197  619737 main.go:141] libmachine: (bridge-418372) creating KVM machine...
	I0127 14:29:58.525214  619737 main.go:141] libmachine: (bridge-418372) creating network...
	I0127 14:29:58.526633  619737 main.go:141] libmachine: (bridge-418372) DBG | found existing default KVM network
	I0127 14:29:58.528058  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.527875  619760 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1d:6c:da} reservation:<nil>}
	I0127 14:29:58.529143  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.529064  619760 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:9f:16} reservation:<nil>}
	I0127 14:29:58.530053  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.529980  619760 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:9b:c5} reservation:<nil>}
	I0127 14:29:58.531138  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.531066  619760 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027fa90}
	I0127 14:29:58.531168  619737 main.go:141] libmachine: (bridge-418372) DBG | created network xml: 
	I0127 14:29:58.531176  619737 main.go:141] libmachine: (bridge-418372) DBG | <network>
	I0127 14:29:58.531181  619737 main.go:141] libmachine: (bridge-418372) DBG |   <name>mk-bridge-418372</name>
	I0127 14:29:58.531190  619737 main.go:141] libmachine: (bridge-418372) DBG |   <dns enable='no'/>
	I0127 14:29:58.531197  619737 main.go:141] libmachine: (bridge-418372) DBG |   
	I0127 14:29:58.531211  619737 main.go:141] libmachine: (bridge-418372) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0127 14:29:58.531225  619737 main.go:141] libmachine: (bridge-418372) DBG |     <dhcp>
	I0127 14:29:58.531254  619737 main.go:141] libmachine: (bridge-418372) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0127 14:29:58.531276  619737 main.go:141] libmachine: (bridge-418372) DBG |     </dhcp>
	I0127 14:29:58.531285  619737 main.go:141] libmachine: (bridge-418372) DBG |   </ip>
	I0127 14:29:58.531292  619737 main.go:141] libmachine: (bridge-418372) DBG |   
	I0127 14:29:58.531300  619737 main.go:141] libmachine: (bridge-418372) DBG | </network>
	I0127 14:29:58.531309  619737 main.go:141] libmachine: (bridge-418372) DBG | 
	I0127 14:29:58.536042  619737 main.go:141] libmachine: (bridge-418372) DBG | trying to create private KVM network mk-bridge-418372 192.168.72.0/24...
	I0127 14:29:58.619397  619737 main.go:141] libmachine: (bridge-418372) DBG | private KVM network mk-bridge-418372 192.168.72.0/24 created
	I0127 14:29:58.619417  619737 main.go:141] libmachine: (bridge-418372) setting up store path in /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372 ...
	I0127 14:29:58.619428  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.619379  619760 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:29:58.619443  619737 main.go:141] libmachine: (bridge-418372) building disk image from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:29:58.619522  619737 main.go:141] libmachine: (bridge-418372) Downloading /home/jenkins/minikube-integration/20327-555419/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 14:29:58.924369  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:58.924221  619760 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa...
	I0127 14:29:59.184940  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.184795  619760 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/bridge-418372.rawdisk...
	I0127 14:29:59.184993  619737 main.go:141] libmachine: (bridge-418372) DBG | Writing magic tar header
	I0127 14:29:59.185009  619737 main.go:141] libmachine: (bridge-418372) DBG | Writing SSH key tar header
	I0127 14:29:59.185032  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.184949  619760 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372 ...
	I0127 14:29:59.185152  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372
	I0127 14:29:59.185180  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube/machines
	I0127 14:29:59.185194  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372 (perms=drwx------)
	I0127 14:29:59.185214  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube/machines (perms=drwxr-xr-x)
	I0127 14:29:59.185231  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419/.minikube (perms=drwxr-xr-x)
	I0127 14:29:59.185244  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration/20327-555419 (perms=drwxrwxr-x)
	I0127 14:29:59.185253  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 14:29:59.185264  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:29:59.185276  619737 main.go:141] libmachine: (bridge-418372) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 14:29:59.185287  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20327-555419
	I0127 14:29:59.185305  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 14:29:59.185319  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home/jenkins
	I0127 14:29:59.185328  619737 main.go:141] libmachine: (bridge-418372) creating domain...
	I0127 14:29:59.185342  619737 main.go:141] libmachine: (bridge-418372) DBG | checking permissions on dir: /home
	I0127 14:29:59.185355  619737 main.go:141] libmachine: (bridge-418372) DBG | skipping /home - not owner
	I0127 14:29:59.186522  619737 main.go:141] libmachine: (bridge-418372) define libvirt domain using xml: 
	I0127 14:29:59.186545  619737 main.go:141] libmachine: (bridge-418372) <domain type='kvm'>
	I0127 14:29:59.186554  619737 main.go:141] libmachine: (bridge-418372)   <name>bridge-418372</name>
	I0127 14:29:59.186567  619737 main.go:141] libmachine: (bridge-418372)   <memory unit='MiB'>3072</memory>
	I0127 14:29:59.186606  619737 main.go:141] libmachine: (bridge-418372)   <vcpu>2</vcpu>
	I0127 14:29:59.186644  619737 main.go:141] libmachine: (bridge-418372)   <features>
	I0127 14:29:59.186658  619737 main.go:141] libmachine: (bridge-418372)     <acpi/>
	I0127 14:29:59.186668  619737 main.go:141] libmachine: (bridge-418372)     <apic/>
	I0127 14:29:59.186687  619737 main.go:141] libmachine: (bridge-418372)     <pae/>
	I0127 14:29:59.186697  619737 main.go:141] libmachine: (bridge-418372)     
	I0127 14:29:59.186713  619737 main.go:141] libmachine: (bridge-418372)   </features>
	I0127 14:29:59.186724  619737 main.go:141] libmachine: (bridge-418372)   <cpu mode='host-passthrough'>
	I0127 14:29:59.186732  619737 main.go:141] libmachine: (bridge-418372)   
	I0127 14:29:59.186741  619737 main.go:141] libmachine: (bridge-418372)   </cpu>
	I0127 14:29:59.186749  619737 main.go:141] libmachine: (bridge-418372)   <os>
	I0127 14:29:59.186759  619737 main.go:141] libmachine: (bridge-418372)     <type>hvm</type>
	I0127 14:29:59.186771  619737 main.go:141] libmachine: (bridge-418372)     <boot dev='cdrom'/>
	I0127 14:29:59.186781  619737 main.go:141] libmachine: (bridge-418372)     <boot dev='hd'/>
	I0127 14:29:59.186791  619737 main.go:141] libmachine: (bridge-418372)     <bootmenu enable='no'/>
	I0127 14:29:59.186799  619737 main.go:141] libmachine: (bridge-418372)   </os>
	I0127 14:29:59.186807  619737 main.go:141] libmachine: (bridge-418372)   <devices>
	I0127 14:29:59.186816  619737 main.go:141] libmachine: (bridge-418372)     <disk type='file' device='cdrom'>
	I0127 14:29:59.186837  619737 main.go:141] libmachine: (bridge-418372)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/boot2docker.iso'/>
	I0127 14:29:59.186851  619737 main.go:141] libmachine: (bridge-418372)       <target dev='hdc' bus='scsi'/>
	I0127 14:29:59.186860  619737 main.go:141] libmachine: (bridge-418372)       <readonly/>
	I0127 14:29:59.186869  619737 main.go:141] libmachine: (bridge-418372)     </disk>
	I0127 14:29:59.186884  619737 main.go:141] libmachine: (bridge-418372)     <disk type='file' device='disk'>
	I0127 14:29:59.186896  619737 main.go:141] libmachine: (bridge-418372)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 14:29:59.186909  619737 main.go:141] libmachine: (bridge-418372)       <source file='/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/bridge-418372.rawdisk'/>
	I0127 14:29:59.186919  619737 main.go:141] libmachine: (bridge-418372)       <target dev='hda' bus='virtio'/>
	I0127 14:29:59.186925  619737 main.go:141] libmachine: (bridge-418372)     </disk>
	I0127 14:29:59.186931  619737 main.go:141] libmachine: (bridge-418372)     <interface type='network'>
	I0127 14:29:59.186939  619737 main.go:141] libmachine: (bridge-418372)       <source network='mk-bridge-418372'/>
	I0127 14:29:59.186945  619737 main.go:141] libmachine: (bridge-418372)       <model type='virtio'/>
	I0127 14:29:59.186968  619737 main.go:141] libmachine: (bridge-418372)     </interface>
	I0127 14:29:59.186980  619737 main.go:141] libmachine: (bridge-418372)     <interface type='network'>
	I0127 14:29:59.186989  619737 main.go:141] libmachine: (bridge-418372)       <source network='default'/>
	I0127 14:29:59.186999  619737 main.go:141] libmachine: (bridge-418372)       <model type='virtio'/>
	I0127 14:29:59.187007  619737 main.go:141] libmachine: (bridge-418372)     </interface>
	I0127 14:29:59.187016  619737 main.go:141] libmachine: (bridge-418372)     <serial type='pty'>
	I0127 14:29:59.187024  619737 main.go:141] libmachine: (bridge-418372)       <target port='0'/>
	I0127 14:29:59.187042  619737 main.go:141] libmachine: (bridge-418372)     </serial>
	I0127 14:29:59.187053  619737 main.go:141] libmachine: (bridge-418372)     <console type='pty'>
	I0127 14:29:59.187060  619737 main.go:141] libmachine: (bridge-418372)       <target type='serial' port='0'/>
	I0127 14:29:59.187070  619737 main.go:141] libmachine: (bridge-418372)     </console>
	I0127 14:29:59.187075  619737 main.go:141] libmachine: (bridge-418372)     <rng model='virtio'>
	I0127 14:29:59.187088  619737 main.go:141] libmachine: (bridge-418372)       <backend model='random'>/dev/random</backend>
	I0127 14:29:59.187099  619737 main.go:141] libmachine: (bridge-418372)     </rng>
	I0127 14:29:59.187109  619737 main.go:141] libmachine: (bridge-418372)     
	I0127 14:29:59.187115  619737 main.go:141] libmachine: (bridge-418372)     
	I0127 14:29:59.187127  619737 main.go:141] libmachine: (bridge-418372)   </devices>
	I0127 14:29:59.187133  619737 main.go:141] libmachine: (bridge-418372) </domain>
	I0127 14:29:59.187147  619737 main.go:141] libmachine: (bridge-418372) 
	I0127 14:29:59.192870  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:dc:94:4c in network default
	I0127 14:29:59.193459  619737 main.go:141] libmachine: (bridge-418372) starting domain...
	I0127 14:29:59.193498  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:29:59.193514  619737 main.go:141] libmachine: (bridge-418372) ensuring networks are active...
	I0127 14:29:59.194186  619737 main.go:141] libmachine: (bridge-418372) Ensuring network default is active
	I0127 14:29:59.194531  619737 main.go:141] libmachine: (bridge-418372) Ensuring network mk-bridge-418372 is active
	I0127 14:29:59.195173  619737 main.go:141] libmachine: (bridge-418372) getting domain XML...
	I0127 14:29:59.196009  619737 main.go:141] libmachine: (bridge-418372) creating domain...
	I0127 14:29:59.603422  619737 main.go:141] libmachine: (bridge-418372) waiting for IP...
	I0127 14:29:59.604334  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:29:59.604867  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:29:59.604937  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.604872  619760 retry.go:31] will retry after 303.965936ms: waiting for domain to come up
	I0127 14:29:59.910634  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:29:59.911365  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:29:59.911395  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:29:59.911327  619760 retry.go:31] will retry after 241.006912ms: waiting for domain to come up
	I0127 14:30:00.153815  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:00.154372  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:00.154403  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:00.154354  619760 retry.go:31] will retry after 323.516048ms: waiting for domain to come up
	I0127 14:30:00.479917  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:00.480471  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:00.480490  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:00.480451  619760 retry.go:31] will retry after 577.842165ms: waiting for domain to come up
	I0127 14:30:01.059664  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:01.060181  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:01.060209  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:01.060153  619760 retry.go:31] will retry after 693.227243ms: waiting for domain to come up
	I0127 14:30:01.754699  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:01.755198  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:01.755231  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:01.755167  619760 retry.go:31] will retry after 601.644547ms: waiting for domain to come up
	I0127 14:30:02.358857  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:02.359425  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:02.359456  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:02.359398  619760 retry.go:31] will retry after 805.211831ms: waiting for domain to come up
	I0127 14:30:03.166329  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:03.166920  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:03.166954  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:03.166895  619760 retry.go:31] will retry after 1.344095834s: waiting for domain to come up
	I0127 14:29:59.914025  618007 addons.go:514] duration metric: took 1.313551088s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 14:29:59.948236  618007 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-418372" context rescaled to 1 replicas
	I0127 14:30:01.444005  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:04.513305  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:04.513804  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:04.513825  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:04.513785  619760 retry.go:31] will retry after 1.439144315s: waiting for domain to come up
	I0127 14:30:05.954624  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:05.955150  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:05.955180  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:05.955114  619760 retry.go:31] will retry after 1.897876702s: waiting for domain to come up
	I0127 14:30:07.854669  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:07.855304  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:07.855364  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:07.855289  619760 retry.go:31] will retry after 1.982634575s: waiting for domain to come up
	I0127 14:30:03.943205  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:05.944150  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:09.839318  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:09.839985  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:09.840015  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:09.839942  619760 retry.go:31] will retry after 3.383361388s: waiting for domain to come up
	I0127 14:30:13.226586  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:13.227082  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:13.227161  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:13.227058  619760 retry.go:31] will retry after 3.076957623s: waiting for domain to come up
	I0127 14:30:08.444021  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:10.944599  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:16.306620  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:16.307278  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find current IP address of domain bridge-418372 in network mk-bridge-418372
	I0127 14:30:16.307306  619737 main.go:141] libmachine: (bridge-418372) DBG | I0127 14:30:16.307257  619760 retry.go:31] will retry after 5.232439528s: waiting for domain to come up
	I0127 14:30:13.443330  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:15.943802  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:21.543562  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.544125  619737 main.go:141] libmachine: (bridge-418372) found domain IP: 192.168.72.158
	I0127 14:30:21.544159  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has current primary IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.544168  619737 main.go:141] libmachine: (bridge-418372) reserving static IP address...
	I0127 14:30:21.544584  619737 main.go:141] libmachine: (bridge-418372) DBG | unable to find host DHCP lease matching {name: "bridge-418372", mac: "52:54:00:34:a5:5b", ip: "192.168.72.158"} in network mk-bridge-418372
	I0127 14:30:21.620096  619737 main.go:141] libmachine: (bridge-418372) DBG | Getting to WaitForSSH function...
	I0127 14:30:21.620142  619737 main.go:141] libmachine: (bridge-418372) reserved static IP address 192.168.72.158 for domain bridge-418372
	I0127 14:30:21.620156  619737 main.go:141] libmachine: (bridge-418372) waiting for SSH...
	I0127 14:30:21.623062  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.623569  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.623601  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.623801  619737 main.go:141] libmachine: (bridge-418372) DBG | Using SSH client type: external
	I0127 14:30:21.623826  619737 main.go:141] libmachine: (bridge-418372) DBG | Using SSH private key: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa (-rw-------)
	I0127 14:30:21.623865  619737 main.go:141] libmachine: (bridge-418372) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:30:21.623880  619737 main.go:141] libmachine: (bridge-418372) DBG | About to run SSH command:
	I0127 14:30:21.623915  619737 main.go:141] libmachine: (bridge-418372) DBG | exit 0
	I0127 14:30:21.749658  619737 main.go:141] libmachine: (bridge-418372) DBG | SSH cmd err, output: <nil>: 
	I0127 14:30:21.749918  619737 main.go:141] libmachine: (bridge-418372) KVM machine creation complete
	I0127 14:30:21.750400  619737 main.go:141] libmachine: (bridge-418372) Calling .GetConfigRaw
	I0127 14:30:21.750961  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:21.751196  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:21.751406  619737 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 14:30:21.751421  619737 main.go:141] libmachine: (bridge-418372) Calling .GetState
	I0127 14:30:21.752834  619737 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 14:30:21.752851  619737 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 14:30:21.752859  619737 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 14:30:21.752883  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:21.755459  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.755886  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.755913  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.756091  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:21.756297  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.756467  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.756642  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:21.756809  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:21.757010  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:21.757020  619737 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 14:30:21.856846  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
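
The `exit 0` probe above is how the log decides the guest's SSH endpoint is ready before provisioning continues. A minimal, hypothetical sketch of that kind of readiness check (not libmachine's actual implementation; the host, user, and key path are copied from the log purely for illustration):

    // Hypothetical SSH readiness probe: dial the guest, run `exit 0`,
    // and treat a clean exit as "machine is up". Illustrative only.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func sshReady(addr, user, keyPath string) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; mirrors StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("exit 0") // same no-op command the log runs
    }

    func main() {
    	// Values taken from the log; the helper itself is a sketch, not minikube code.
    	err := sshReady("192.168.72.158:22", "docker",
    		"/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa")
    	if err != nil {
    		log.Fatalf("SSH not ready: %v", err)
    	}
    	fmt.Println("SSH is up")
    }
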
	I0127 14:30:21.856875  619737 main.go:141] libmachine: Detecting the provisioner...
	I0127 14:30:21.856885  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:21.859711  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.860096  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.860133  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.860331  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:21.860555  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.860723  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.860912  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:21.861103  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:21.861357  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:21.861375  619737 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 14:30:21.966551  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 14:30:21.966638  619737 main.go:141] libmachine: found compatible host: buildroot
	I0127 14:30:21.966653  619737 main.go:141] libmachine: Provisioning with buildroot...
	I0127 14:30:21.966663  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:30:21.966929  619737 buildroot.go:166] provisioning hostname "bridge-418372"
	I0127 14:30:21.966993  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:30:21.967184  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:21.969863  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.970301  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:21.970330  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:21.970473  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:21.970662  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.970806  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:21.970980  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:21.971184  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:21.971397  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:21.971411  619737 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-418372 && echo "bridge-418372" | sudo tee /etc/hostname
	I0127 14:30:22.088428  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-418372
	
	I0127 14:30:22.088472  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.091063  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.091586  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.091611  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.091821  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.092004  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.092139  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.092303  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.092514  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:22.092705  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:22.092732  619737 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-418372' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-418372/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-418372' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:30:22.206493  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:30:22.206523  619737 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20327-555419/.minikube CaCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20327-555419/.minikube}
	I0127 14:30:22.206555  619737 buildroot.go:174] setting up certificates
	I0127 14:30:22.206570  619737 provision.go:84] configureAuth start
	I0127 14:30:22.206580  619737 main.go:141] libmachine: (bridge-418372) Calling .GetMachineName
	I0127 14:30:22.206870  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:22.209586  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.209920  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.209959  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.210081  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.212164  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.212510  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.212527  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.212711  619737 provision.go:143] copyHostCerts
	I0127 14:30:22.212761  619737 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem, removing ...
	I0127 14:30:22.212785  619737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem
	I0127 14:30:22.212874  619737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/key.pem (1675 bytes)
	I0127 14:30:22.213016  619737 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem, removing ...
	I0127 14:30:22.213027  619737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem
	I0127 14:30:22.213064  619737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/ca.pem (1078 bytes)
	I0127 14:30:22.213138  619737 exec_runner.go:144] found /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem, removing ...
	I0127 14:30:22.213146  619737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem
	I0127 14:30:22.213168  619737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20327-555419/.minikube/cert.pem (1123 bytes)
	I0127 14:30:22.213230  619737 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem org=jenkins.bridge-418372 san=[127.0.0.1 192.168.72.158 bridge-418372 localhost minikube]
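
The provision step above issues a server certificate whose SANs cover the loopback address, the guest IP, and the host names minikube uses. A simplified, hypothetical sketch of producing a certificate with those SANs via Go's crypto/x509 (self-signed here for brevity; the real flow signs with the minikube CA key listed in the log):

    // Hypothetical sketch: create a server certificate carrying the SANs from the
    // log line above. Self-signed for brevity; illustrative only.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-418372"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile dump below
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"bridge-418372", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.158")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		log.Fatal(err)
    	}
    }
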
	I0127 14:30:22.548623  619737 provision.go:177] copyRemoteCerts
	I0127 14:30:22.548680  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:30:22.548706  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.551241  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.551575  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.551604  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.551796  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.552020  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.552246  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.552395  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:22.643890  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:30:22.670713  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 14:30:22.693627  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 14:30:22.717638  619737 provision.go:87] duration metric: took 511.05611ms to configureAuth
	I0127 14:30:22.717668  619737 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:30:22.717835  619737 config.go:182] Loaded profile config "bridge-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:30:22.717935  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.720466  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.720835  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.720865  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.721045  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.721238  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.721385  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.721514  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.721646  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:22.721822  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:22.721844  619737 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:30:22.938113  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:30:22.938145  619737 main.go:141] libmachine: Checking connection to Docker...
	I0127 14:30:22.938155  619737 main.go:141] libmachine: (bridge-418372) Calling .GetURL
	I0127 14:30:22.939593  619737 main.go:141] libmachine: (bridge-418372) DBG | using libvirt version 6000000
	I0127 14:30:22.942205  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.942565  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.942607  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.942749  619737 main.go:141] libmachine: Docker is up and running!
	I0127 14:30:22.942779  619737 main.go:141] libmachine: Reticulating splines...
	I0127 14:30:22.942791  619737 client.go:171] duration metric: took 24.418851853s to LocalClient.Create
	I0127 14:30:22.942815  619737 start.go:167] duration metric: took 24.418910733s to libmachine.API.Create "bridge-418372"
	I0127 14:30:22.942825  619737 start.go:293] postStartSetup for "bridge-418372" (driver="kvm2")
	I0127 14:30:22.942834  619737 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:30:22.942854  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:22.943081  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:30:22.943104  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:22.945274  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.945649  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:22.945678  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:22.945844  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:22.946014  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:22.946145  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:22.946279  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:23.027435  619737 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:30:23.031408  619737 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:30:23.031432  619737 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/addons for local assets ...
	I0127 14:30:23.031490  619737 filesync.go:126] Scanning /home/jenkins/minikube-integration/20327-555419/.minikube/files for local assets ...
	I0127 14:30:23.031589  619737 filesync.go:149] local asset: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem -> 5626362.pem in /etc/ssl/certs
	I0127 14:30:23.031684  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 14:30:23.041098  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:30:23.064771  619737 start.go:296] duration metric: took 121.935009ms for postStartSetup
	I0127 14:30:23.064822  619737 main.go:141] libmachine: (bridge-418372) Calling .GetConfigRaw
	I0127 14:30:23.065340  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:23.068126  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.068566  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.068585  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.068850  619737 profile.go:143] Saving config to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/config.json ...
	I0127 14:30:23.069082  619737 start.go:128] duration metric: took 24.564244155s to createHost
	I0127 14:30:23.069112  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:23.071565  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.071930  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.071958  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.072093  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:23.072294  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.072485  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.072602  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:23.072779  619737 main.go:141] libmachine: Using SSH client type: native
	I0127 14:30:23.072928  619737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.158 22 <nil> <nil>}
	I0127 14:30:23.072937  619737 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:30:23.173863  619737 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737988223.150041878
	
	I0127 14:30:23.173884  619737 fix.go:216] guest clock: 1737988223.150041878
	I0127 14:30:23.173890  619737 fix.go:229] Guest: 2025-01-27 14:30:23.150041878 +0000 UTC Remote: 2025-01-27 14:30:23.069097778 +0000 UTC m=+24.679552593 (delta=80.9441ms)
	I0127 14:30:23.173936  619737 fix.go:200] guest clock delta is within tolerance: 80.9441ms
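
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~81ms delta as within tolerance. A small, hypothetical sketch of that comparison (the 2s threshold is an assumption for illustration, not minikube's configured value):

    // Hypothetical guest-clock tolerance check: parse the guest's `date +%s.%N`
    // output, diff it against the host clock, and flag large skews. Illustrative only.
    package main

    import (
    	"fmt"
    	"log"
    	"strconv"
    	"time"
    )

    func guestTime(dateOutput string) (time.Time, error) {
    	secs, err := strconv.ParseFloat(dateOutput, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	return time.Unix(0, int64(secs*float64(time.Second))), nil
    }

    func main() {
    	guest, err := guestTime("1737988223.150041878") // value from the log above
    	if err != nil {
    		log.Fatal(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed threshold, for illustration
    	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
    }
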
	I0127 14:30:23.173948  619737 start.go:83] releasing machines lock for "bridge-418372", held for 24.669221959s
	I0127 14:30:23.173973  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.174207  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:23.176840  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.177209  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.177240  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.177413  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.177905  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.178089  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:23.178172  619737 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:30:23.178218  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:23.178318  619737 ssh_runner.go:195] Run: cat /version.json
	I0127 14:30:23.178350  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:23.181082  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181120  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181443  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.181470  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181496  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:23.181513  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:23.181567  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:23.181734  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.181816  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:23.181907  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:23.181974  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:23.182052  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:23.182110  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:23.182209  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:23.254783  619737 ssh_runner.go:195] Run: systemctl --version
	I0127 14:30:23.277936  619737 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:30:18.443736  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:20.942676  618007 node_ready.go:53] node "flannel-418372" has status "Ready":"False"
	I0127 14:30:21.452564  618007 node_ready.go:49] node "flannel-418372" has status "Ready":"True"
	I0127 14:30:21.452591  618007 node_ready.go:38] duration metric: took 22.012579891s for node "flannel-418372" to be "Ready" ...
	I0127 14:30:21.452602  618007 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:30:21.461767  618007 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:23.436466  619737 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:30:23.443141  619737 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:30:23.443197  619737 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:30:23.460545  619737 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:30:23.460567  619737 start.go:495] detecting cgroup driver to use...
	I0127 14:30:23.460628  619737 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:30:23.479133  619737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:30:23.494546  619737 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:30:23.494614  619737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:30:23.508408  619737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:30:23.521348  619737 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:30:23.635456  619737 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:30:23.765321  619737 docker.go:233] disabling docker service ...
	I0127 14:30:23.765393  619737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:30:23.778859  619737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:30:23.790920  619737 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:30:23.924634  619737 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:30:24.053414  619737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:30:24.066957  619737 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:30:24.085971  619737 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:30:24.086040  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.096202  619737 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:30:24.096256  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.106388  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.116650  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.127369  619737 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:30:24.137556  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.147564  619737 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:30:24.166019  619737 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
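
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image, cgroup driver, conmon cgroup, and default sysctls. A hypothetical Go sketch of the same kind of in-place rewrite for the cgroup_manager line (minikube actually performs this over SSH with sed, as shown; the helper below is illustrative only):

    // Hypothetical sketch: rewrite the cgroup_manager line of 02-crio.conf,
    // mirroring the sed invocation logged above. Illustrative only.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
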
	I0127 14:30:24.176231  619737 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:30:24.185246  619737 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:30:24.185296  619737 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:30:24.198571  619737 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:30:24.207701  619737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:30:24.326803  619737 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:30:24.416087  619737 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:30:24.416166  619737 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:30:24.421135  619737 start.go:563] Will wait 60s for crictl version
	I0127 14:30:24.421191  619737 ssh_runner.go:195] Run: which crictl
	I0127 14:30:24.425096  619737 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:30:24.467553  619737 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:30:24.467656  619737 ssh_runner.go:195] Run: crio --version
	I0127 14:30:24.494858  619737 ssh_runner.go:195] Run: crio --version
	I0127 14:30:24.523951  619737 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:30:24.525015  619737 main.go:141] libmachine: (bridge-418372) Calling .GetIP
	I0127 14:30:24.527690  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:24.528062  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:24.528102  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:24.528378  619737 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 14:30:24.532290  619737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:30:24.545520  619737 kubeadm.go:883] updating cluster {Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:30:24.545653  619737 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:30:24.545722  619737 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:30:24.578117  619737 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 14:30:24.578183  619737 ssh_runner.go:195] Run: which lz4
	I0127 14:30:24.581940  619737 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:30:24.585899  619737 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:30:24.585926  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 14:30:26.046393  619737 crio.go:462] duration metric: took 1.464480043s to copy over tarball
	I0127 14:30:26.046476  619737 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:30:28.286060  619737 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.239526518s)
	I0127 14:30:28.286090  619737 crio.go:469] duration metric: took 2.239666444s to extract the tarball
	I0127 14:30:28.286098  619737 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 14:30:28.329925  619737 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:30:28.372463  619737 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:30:28.372493  619737 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:30:28.372506  619737 kubeadm.go:934] updating node { 192.168.72.158 8443 v1.32.1 crio true true} ...
	I0127 14:30:28.372639  619737 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-418372 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0127 14:30:28.372730  619737 ssh_runner.go:195] Run: crio config
	I0127 14:30:23.469182  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:25.470378  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:27.969278  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:28.431389  619737 cni.go:84] Creating CNI manager for "bridge"
	I0127 14:30:28.431419  619737 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:30:28.431445  619737 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.158 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-418372 NodeName:bridge-418372 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:30:28.431596  619737 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-418372"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.158"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.158"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
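The kubeadm config dumped above ends with the KubeletConfiguration and KubeProxyConfiguration documents that get written to /var/tmp/minikube/kubeadm.yaml.new. A hypothetical sketch of reading a few of those kubelet fields back with gopkg.in/yaml.v3, just to show how the generated YAML maps onto settings such as cgroupDriver and staticPodPath (struct and field choice are illustrative, not minikube's types):

    // Hypothetical sketch: unmarshal a KubeletConfiguration fragment like the one
    // generated above and print the fields minikube sets. Illustrative only.
    package main

    import (
    	"fmt"
    	"log"

    	"gopkg.in/yaml.v3"
    )

    type kubeletConfig struct {
    	Kind                     string `yaml:"kind"`
    	CgroupDriver             string `yaml:"cgroupDriver"`
    	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    	StaticPodPath            string `yaml:"staticPodPath"`
    	FailSwapOn               bool   `yaml:"failSwapOn"`
    }

    func main() {
    	doc := []byte(`apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    staticPodPath: /etc/kubernetes/manifests
    failSwapOn: false
    `)
    	var cfg kubeletConfig
    	if err := yaml.Unmarshal(doc, &cfg); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s: driver=%s runtime=%s staticPods=%s swap=%v\n",
    		cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.StaticPodPath, cfg.FailSwapOn)
    }
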
	I0127 14:30:28.431664  619737 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:30:28.443712  619737 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:30:28.443775  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:30:28.453106  619737 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0127 14:30:28.472323  619737 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:30:28.488568  619737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0127 14:30:28.505501  619737 ssh_runner.go:195] Run: grep 192.168.72.158	control-plane.minikube.internal$ /etc/hosts
	I0127 14:30:28.509628  619737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:30:28.522026  619737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:30:28.644859  619737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:30:28.660903  619737 certs.go:68] Setting up /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372 for IP: 192.168.72.158
	I0127 14:30:28.660924  619737 certs.go:194] generating shared ca certs ...
	I0127 14:30:28.660945  619737 certs.go:226] acquiring lock for ca certs: {Name:mk51b28ee386f676931205574822c74a9ffc3062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:28.661145  619737 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key
	I0127 14:30:28.661204  619737 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key
	I0127 14:30:28.661218  619737 certs.go:256] generating profile certs ...
	I0127 14:30:28.661295  619737 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.key
	I0127 14:30:28.661316  619737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt with IP's: []
	I0127 14:30:28.906551  619737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt ...
	I0127 14:30:28.906578  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.crt: {Name:mk1e2537950485aa8b4f79c1832edd87a69fac76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:28.906770  619737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.key ...
	I0127 14:30:28.906787  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/client.key: {Name:mkefc91979c182951e8440280201021e6feaf0b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:28.906903  619737 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b
	I0127 14:30:28.906926  619737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.158]
	I0127 14:30:29.091201  619737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b ...
	I0127 14:30:29.091235  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b: {Name:mkd8eb8b7ce81ecb1ea18b8612606f856d364bd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.091400  619737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b ...
	I0127 14:30:29.091415  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b: {Name:mk69a1ca35d981f975238e5836687217bd190f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.091489  619737 certs.go:381] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt.026b2f5b -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt
	I0127 14:30:29.091560  619737 certs.go:385] copying /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key.026b2f5b -> /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key
	I0127 14:30:29.091639  619737 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key
	I0127 14:30:29.091657  619737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt with IP's: []
	I0127 14:30:29.149860  619737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt ...
	I0127 14:30:29.149879  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt: {Name:mk7035d438a8cb1c492fb958853882394afbe27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.149993  619737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key ...
	I0127 14:30:29.150004  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key: {Name:mka8c6fd9acdaec459c9ef3e4dfbb4b5c5547317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:29.150161  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem (1338 bytes)
	W0127 14:30:29.150202  619737 certs.go:480] ignoring /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636_empty.pem, impossibly tiny 0 bytes
	I0127 14:30:29.150212  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 14:30:29.150232  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:30:29.150253  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:30:29.150272  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/certs/key.pem (1675 bytes)
	I0127 14:30:29.150313  619737 certs.go:484] found cert: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem (1708 bytes)
	I0127 14:30:29.150944  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:30:29.175883  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 14:30:29.199205  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:30:29.222754  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:30:29.245909  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 14:30:29.269824  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 14:30:29.292470  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:30:29.315043  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/bridge-418372/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:30:29.354655  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/certs/562636.pem --> /usr/share/ca-certificates/562636.pem (1338 bytes)
	I0127 14:30:29.383756  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/ssl/certs/5626362.pem --> /usr/share/ca-certificates/5626362.pem (1708 bytes)
	I0127 14:30:29.416181  619737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:30:29.439715  619737 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:30:29.456721  619737 ssh_runner.go:195] Run: openssl version
	I0127 14:30:29.464239  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:30:29.475723  619737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:30:29.480470  619737 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 13:03 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:30:29.480515  619737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:30:29.486322  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:30:29.496846  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/562636.pem && ln -fs /usr/share/ca-certificates/562636.pem /etc/ssl/certs/562636.pem"
	I0127 14:30:29.507085  619737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/562636.pem
	I0127 14:30:29.511703  619737 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 13:11 /usr/share/ca-certificates/562636.pem
	I0127 14:30:29.511754  619737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/562636.pem
	I0127 14:30:29.517449  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/562636.pem /etc/ssl/certs/51391683.0"
	I0127 14:30:29.527666  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5626362.pem && ln -fs /usr/share/ca-certificates/5626362.pem /etc/ssl/certs/5626362.pem"
	I0127 14:30:29.540074  619737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5626362.pem
	I0127 14:30:29.544916  619737 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 13:11 /usr/share/ca-certificates/5626362.pem
	I0127 14:30:29.544955  619737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5626362.pem
	I0127 14:30:29.551000  619737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5626362.pem /etc/ssl/certs/3ec20f2e.0"
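The openssl/ln sequence above follows the standard OpenSSL trust-store convention: each CA certificate copied into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, which is exactly what `openssl x509 -hash -noout` prints. A minimal sketch of the same pattern for a hypothetical certificate (the path below is illustrative, not one of the files from this run):

    # Sketch of the hash-and-symlink pattern in the log above; example.pem is a placeholder.
    CERT=/usr/share/ca-certificates/example.pem
    # Print the subject hash that OpenSSL's default verify paths use for lookup.
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    # Expose the certificate as /etc/ssl/certs/<subject-hash>.0 so TLS clients on the node trust it.
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"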
	I0127 14:30:29.562167  619737 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:30:29.566616  619737 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 14:30:29.566681  619737 kubeadm.go:392] StartCluster: {Name:bridge-418372 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-418372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:30:29.566758  619737 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:30:29.566808  619737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:30:29.609003  619737 cri.go:89] found id: ""
	I0127 14:30:29.609076  619737 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:30:29.618951  619737 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:30:29.628562  619737 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:30:29.637724  619737 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:30:29.637742  619737 kubeadm.go:157] found existing configuration files:
	
	I0127 14:30:29.637782  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:30:29.648947  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:30:29.648987  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:30:29.657991  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:30:29.666526  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:30:29.666559  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:30:29.676483  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:30:29.685024  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:30:29.685073  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:30:29.693937  619737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:30:29.702972  619737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:30:29.703020  619737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
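The four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is otherwise deleted so kubeadm can regenerate it. Condensed into a single loop, a sketch of the same logic (not minikube's actual code):

    # Sketch of the stale-config cleanup shown above: drop any kubeconfig that does not
    # point at the expected control-plane endpoint so `kubeadm init` rewrites it.
    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done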
	I0127 14:30:29.712304  619737 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:30:29.774803  619737 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:30:29.774988  619737 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:30:29.875816  619737 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:30:29.875979  619737 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:30:29.876114  619737 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:30:29.888173  619737 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:30:29.945220  619737 out.go:235]   - Generating certificates and keys ...
	I0127 14:30:29.945359  619737 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:30:29.945448  619737 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:30:30.158542  619737 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 14:30:30.651792  619737 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 14:30:30.728655  619737 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 14:30:30.849544  619737 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 14:30:31.081949  619737 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 14:30:31.082098  619737 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-418372 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	I0127 14:30:31.339755  619737 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 14:30:31.339980  619737 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-418372 localhost] and IPs [192.168.72.158 127.0.0.1 ::1]
	I0127 14:30:31.556885  619737 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 14:30:31.958984  619737 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 14:30:32.398271  619737 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 14:30:32.398452  619737 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:30:32.525025  619737 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:30:32.699085  619737 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:30:33.067374  619737 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:30:33.229761  619737 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:30:30.074789  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:32.468447  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:33.740325  619737 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:30:33.741768  619737 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:30:33.745759  619737 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:30:34.472510  618007 pod_ready.go:103] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:35.969131  618007 pod_ready.go:93] pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:35.969163  618007 pod_ready.go:82] duration metric: took 14.507366859s for pod "coredns-668d6bf9bc-jnmf4" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.969178  618007 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.974351  618007 pod_ready.go:93] pod "etcd-flannel-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:35.974376  618007 pod_ready.go:82] duration metric: took 5.188773ms for pod "etcd-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.974389  618007 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.979590  618007 pod_ready.go:93] pod "kube-apiserver-flannel-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:35.979610  618007 pod_ready.go:82] duration metric: took 5.212396ms for pod "kube-apiserver-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.979623  618007 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.984005  618007 pod_ready.go:93] pod "kube-controller-manager-flannel-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:35.984026  618007 pod_ready.go:82] duration metric: took 4.395194ms for pod "kube-controller-manager-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.984035  618007 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-5gszq" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.988140  618007 pod_ready.go:93] pod "kube-proxy-5gszq" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:35.988163  618007 pod_ready.go:82] duration metric: took 4.120445ms for pod "kube-proxy-5gszq" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:35.988179  618007 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:36.366430  618007 pod_ready.go:93] pod "kube-scheduler-flannel-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:36.366453  618007 pod_ready.go:82] duration metric: took 378.266563ms for pod "kube-scheduler-flannel-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:36.366464  618007 pod_ready.go:39] duration metric: took 14.913850556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:30:36.366482  618007 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:30:36.366541  618007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:30:36.387742  618007 api_server.go:72] duration metric: took 37.787293582s to wait for apiserver process to appear ...
	I0127 14:30:36.387769  618007 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:30:36.387798  618007 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I0127 14:30:36.394095  618007 api_server.go:279] https://192.168.50.236:8443/healthz returned 200:
	ok
	I0127 14:30:36.395090  618007 api_server.go:141] control plane version: v1.32.1
	I0127 14:30:36.395112  618007 api_server.go:131] duration metric: took 7.335974ms to wait for apiserver health ...
	I0127 14:30:36.395120  618007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:30:36.572676  618007 system_pods.go:59] 7 kube-system pods found
	I0127 14:30:36.572713  618007 system_pods.go:61] "coredns-668d6bf9bc-jnmf4" [c977d232-5060-4bb7-8a11-1834ac61ef70] Running
	I0127 14:30:36.572722  618007 system_pods.go:61] "etcd-flannel-418372" [b6786b38-1937-4cfb-8a7b-d27847d7c390] Running
	I0127 14:30:36.572732  618007 system_pods.go:61] "kube-apiserver-flannel-418372" [94d4d209-0533-4c6d-92fc-5de7f59a5ca5] Running
	I0127 14:30:36.572739  618007 system_pods.go:61] "kube-controller-manager-flannel-418372" [c09eb55e-c216-472e-bec3-74d7bdd0d915] Running
	I0127 14:30:36.572747  618007 system_pods.go:61] "kube-proxy-5gszq" [11888572-b936-4c6b-99f3-8469d40359e5] Running
	I0127 14:30:36.572752  618007 system_pods.go:61] "kube-scheduler-flannel-418372" [8954ced6-4dda-4e4e-bcfc-19caef64932d] Running
	I0127 14:30:36.572757  618007 system_pods.go:61] "storage-provisioner" [f1193abf-2fe5-4e06-a829-d9b51a5cd773] Running
	I0127 14:30:36.572767  618007 system_pods.go:74] duration metric: took 177.638734ms to wait for pod list to return data ...
	I0127 14:30:36.572777  618007 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:30:36.767343  618007 default_sa.go:45] found service account: "default"
	I0127 14:30:36.767380  618007 default_sa.go:55] duration metric: took 194.588661ms for default service account to be created ...
	I0127 14:30:36.767392  618007 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:30:36.972476  618007 system_pods.go:87] 7 kube-system pods found
	I0127 14:30:37.166823  618007 system_pods.go:105] "coredns-668d6bf9bc-jnmf4" [c977d232-5060-4bb7-8a11-1834ac61ef70] Running
	I0127 14:30:37.166851  618007 system_pods.go:105] "etcd-flannel-418372" [b6786b38-1937-4cfb-8a7b-d27847d7c390] Running
	I0127 14:30:37.166858  618007 system_pods.go:105] "kube-apiserver-flannel-418372" [94d4d209-0533-4c6d-92fc-5de7f59a5ca5] Running
	I0127 14:30:37.166866  618007 system_pods.go:105] "kube-controller-manager-flannel-418372" [c09eb55e-c216-472e-bec3-74d7bdd0d915] Running
	I0127 14:30:37.166873  618007 system_pods.go:105] "kube-proxy-5gszq" [11888572-b936-4c6b-99f3-8469d40359e5] Running
	I0127 14:30:37.166880  618007 system_pods.go:105] "kube-scheduler-flannel-418372" [8954ced6-4dda-4e4e-bcfc-19caef64932d] Running
	I0127 14:30:37.166887  618007 system_pods.go:105] "storage-provisioner" [f1193abf-2fe5-4e06-a829-d9b51a5cd773] Running
	I0127 14:30:37.166898  618007 system_pods.go:147] duration metric: took 399.497203ms to wait for k8s-apps to be running ...
	I0127 14:30:37.166907  618007 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 14:30:37.166960  618007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:30:37.184544  618007 system_svc.go:56] duration metric: took 17.628067ms WaitForService to wait for kubelet
	I0127 14:30:37.184580  618007 kubeadm.go:582] duration metric: took 38.584133747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:30:37.184603  618007 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:30:37.366971  618007 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:30:37.367010  618007 node_conditions.go:123] node cpu capacity is 2
	I0127 14:30:37.367029  618007 node_conditions.go:105] duration metric: took 182.419341ms to run NodePressure ...
	I0127 14:30:37.367045  618007 start.go:241] waiting for startup goroutines ...
	I0127 14:30:37.367054  618007 start.go:246] waiting for cluster config update ...
	I0127 14:30:37.367071  618007 start.go:255] writing updated cluster config ...
	I0127 14:30:37.367429  618007 ssh_runner.go:195] Run: rm -f paused
	I0127 14:30:37.421409  618007 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:30:37.423206  618007 out.go:177] * Done! kubectl is now configured to use "flannel-418372" cluster and "default" namespace by default
	I0127 14:30:33.812497  619737 out.go:235]   - Booting up control plane ...
	I0127 14:30:33.812717  619737 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:30:33.812863  619737 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:30:33.812961  619737 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:30:33.813094  619737 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:30:33.813279  619737 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:30:33.813350  619737 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:30:33.921105  619737 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:30:33.921239  619737 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:30:34.923796  619737 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003590789s
	I0127 14:30:34.923910  619737 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:30:39.924192  619737 kubeadm.go:310] [api-check] The API server is healthy after 5.001293699s
	I0127 14:30:39.935144  619737 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:30:39.958823  619737 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:30:39.996057  619737 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:30:39.996312  619737 kubeadm.go:310] [mark-control-plane] Marking the node bridge-418372 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:30:40.010139  619737 kubeadm.go:310] [bootstrap-token] Using token: r7ccxo.kgv6nq8qhg7ecp3z
	I0127 14:30:40.011473  619737 out.go:235]   - Configuring RBAC rules ...
	I0127 14:30:40.011597  619737 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:30:40.020901  619737 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:30:40.029801  619737 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:30:40.033037  619737 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:30:40.036413  619737 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:30:40.039570  619737 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:30:40.328924  619737 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:30:40.747593  619737 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:30:41.328255  619737 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:30:41.329246  619737 kubeadm.go:310] 
	I0127 14:30:41.329310  619737 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:30:41.329319  619737 kubeadm.go:310] 
	I0127 14:30:41.329399  619737 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:30:41.329407  619737 kubeadm.go:310] 
	I0127 14:30:41.329428  619737 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:30:41.329482  619737 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:30:41.329526  619737 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:30:41.329555  619737 kubeadm.go:310] 
	I0127 14:30:41.329655  619737 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:30:41.329665  619737 kubeadm.go:310] 
	I0127 14:30:41.329745  619737 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:30:41.329766  619737 kubeadm.go:310] 
	I0127 14:30:41.329851  619737 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:30:41.329954  619737 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:30:41.330056  619737 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:30:41.330071  619737 kubeadm.go:310] 
	I0127 14:30:41.330176  619737 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:30:41.330296  619737 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:30:41.330314  619737 kubeadm.go:310] 
	I0127 14:30:41.330417  619737 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r7ccxo.kgv6nq8qhg7ecp3z \
	I0127 14:30:41.330548  619737 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a60ff6161e02b5a75df4f173d820326404ac2037065d4322193a60c87e11fb02 \
	I0127 14:30:41.330576  619737 kubeadm.go:310] 	--control-plane 
	I0127 14:30:41.330582  619737 kubeadm.go:310] 
	I0127 14:30:41.330649  619737 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:30:41.330656  619737 kubeadm.go:310] 
	I0127 14:30:41.330721  619737 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r7ccxo.kgv6nq8qhg7ecp3z \
	I0127 14:30:41.330803  619737 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a60ff6161e02b5a75df4f173d820326404ac2037065d4322193a60c87e11fb02 
	I0127 14:30:41.331862  619737 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:30:41.331884  619737 cni.go:84] Creating CNI manager for "bridge"
	I0127 14:30:41.333393  619737 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:30:41.334528  619737 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:30:41.347863  619737 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
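The 496-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge CNI configuration for this profile; its exact contents are not reproduced in the log. A typical bridge conflist of this shape looks roughly like the following (field values are illustrative placeholders, not the file minikube actually wrote):

    # Illustrative only: a generic bridge + host-local + portmap conflist of the kind
    # installed above. Values are placeholders, not minikube's exact configuration.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF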
	I0127 14:30:41.370602  619737 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:30:41.370705  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:41.370715  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-418372 minikube.k8s.io/updated_at=2025_01_27T14_30_41_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a23717f006184090cd3c7894641a342ba4ae8c4d minikube.k8s.io/name=bridge-418372 minikube.k8s.io/primary=true
	I0127 14:30:41.535021  619737 ops.go:34] apiserver oom_adj: -16
	I0127 14:30:41.535150  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:42.035857  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:42.535777  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:43.035364  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:43.535454  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:44.035873  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:44.535827  619737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:30:44.617399  619737 kubeadm.go:1113] duration metric: took 3.24676419s to wait for elevateKubeSystemPrivileges
	I0127 14:30:44.617441  619737 kubeadm.go:394] duration metric: took 15.050776308s to StartCluster
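The repeated `kubectl get sa default` calls between 14:30:41 and 14:30:44 above are a poll loop: after `kubeadm init`, minikube waits for the default ServiceAccount to exist before treating the kube-system privilege setup (the minikube-rbac ClusterRoleBinding) as complete. A condensed sketch of that wait (interval and retry count are illustrative):

    # Illustrative poll loop matching the repeated "get sa default" calls above.
    KUBECTL=/var/lib/minikube/binaries/v1.32.1/kubectl
    for i in $(seq 1 120); do
        if sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; then
            break      # the default ServiceAccount exists; RBAC bootstrap can finish
        fi
        sleep 0.5      # roughly the ~500ms spacing visible in the timestamps above
    done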
	I0127 14:30:44.617463  619737 settings.go:142] acquiring lock: {Name:mk3584d1c70a231ddef63c926d3bba51690f47f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:44.617560  619737 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:30:44.620051  619737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20327-555419/kubeconfig: {Name:mk8c16ea416e86f841466e2c884d68572c62219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:30:44.620334  619737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 14:30:44.620353  619737 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.158 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:30:44.620428  619737 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 14:30:44.620531  619737 addons.go:69] Setting storage-provisioner=true in profile "bridge-418372"
	I0127 14:30:44.620558  619737 addons.go:238] Setting addon storage-provisioner=true in "bridge-418372"
	I0127 14:30:44.620565  619737 addons.go:69] Setting default-storageclass=true in profile "bridge-418372"
	I0127 14:30:44.620590  619737 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-418372"
	I0127 14:30:44.620600  619737 host.go:66] Checking if "bridge-418372" exists ...
	I0127 14:30:44.620555  619737 config.go:182] Loaded profile config "bridge-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:30:44.621004  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:30:44.621004  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:30:44.621053  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:30:44.621060  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:30:44.622030  619737 out.go:177] * Verifying Kubernetes components...
	I0127 14:30:44.623413  619737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:30:44.638348  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0127 14:30:44.638411  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I0127 14:30:44.638914  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:30:44.638980  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:30:44.639484  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:30:44.639505  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:30:44.639683  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:30:44.639713  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:30:44.639863  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:30:44.640136  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:30:44.640333  619737 main.go:141] libmachine: (bridge-418372) Calling .GetState
	I0127 14:30:44.640456  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:30:44.640501  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:30:44.644089  619737 addons.go:238] Setting addon default-storageclass=true in "bridge-418372"
	I0127 14:30:44.644125  619737 host.go:66] Checking if "bridge-418372" exists ...
	I0127 14:30:44.644404  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:30:44.644446  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:30:44.659927  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38997
	I0127 14:30:44.660334  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:30:44.660485  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0127 14:30:44.660844  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:30:44.660864  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:30:44.660884  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:30:44.661227  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:30:44.661372  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:30:44.661395  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:30:44.661697  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:30:44.661858  619737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:30:44.661873  619737 main.go:141] libmachine: (bridge-418372) Calling .GetState
	I0127 14:30:44.661898  619737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:30:44.663597  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:44.665488  619737 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:30:44.666780  619737 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:30:44.666804  619737 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:30:44.666825  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:44.672578  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:44.673044  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:44.673131  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:44.673270  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:44.673488  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:44.673671  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:44.673816  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:44.681682  619737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I0127 14:30:44.682285  619737 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:30:44.682855  619737 main.go:141] libmachine: Using API Version  1
	I0127 14:30:44.682881  619737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:30:44.683214  619737 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:30:44.683429  619737 main.go:141] libmachine: (bridge-418372) Calling .GetState
	I0127 14:30:44.685036  619737 main.go:141] libmachine: (bridge-418372) Calling .DriverName
	I0127 14:30:44.685243  619737 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:30:44.685260  619737 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:30:44.685278  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHHostname
	I0127 14:30:44.688145  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:44.688619  619737 main.go:141] libmachine: (bridge-418372) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:a5:5b", ip: ""} in network mk-bridge-418372: {Iface:virbr1 ExpiryTime:2025-01-27 15:30:13 +0000 UTC Type:0 Mac:52:54:00:34:a5:5b Iaid: IPaddr:192.168.72.158 Prefix:24 Hostname:bridge-418372 Clientid:01:52:54:00:34:a5:5b}
	I0127 14:30:44.688643  619737 main.go:141] libmachine: (bridge-418372) DBG | domain bridge-418372 has defined IP address 192.168.72.158 and MAC address 52:54:00:34:a5:5b in network mk-bridge-418372
	I0127 14:30:44.688793  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHPort
	I0127 14:30:44.688984  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHKeyPath
	I0127 14:30:44.689180  619737 main.go:141] libmachine: (bridge-418372) Calling .GetSSHUsername
	I0127 14:30:44.689327  619737 sshutil.go:53] new ssh client: &{IP:192.168.72.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/bridge-418372/id_rsa Username:docker}
	I0127 14:30:44.781712  619737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
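The sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the host-side IP (192.168.72.1 here) ahead of the forward directive, and adds a log directive before errors. Reconstructed from those sed expressions (not copied from the cluster), the relevant part of the resulting Corefile looks roughly like this, with the surrounding plugins omitted:

        log
        errors
        ...
        hosts {
           192.168.72.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf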
	I0127 14:30:44.825946  619737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:30:44.954327  619737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:30:44.981928  619737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:30:45.193916  619737 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0127 14:30:45.195924  619737 node_ready.go:35] waiting up to 15m0s for node "bridge-418372" to be "Ready" ...
	I0127 14:30:45.209959  619737 node_ready.go:49] node "bridge-418372" has status "Ready":"True"
	I0127 14:30:45.209983  619737 node_ready.go:38] duration metric: took 14.022807ms for node "bridge-418372" to be "Ready" ...
	I0127 14:30:45.209994  619737 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:30:45.230141  619737 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:45.248044  619737 main.go:141] libmachine: Making call to close driver server
	I0127 14:30:45.248072  619737 main.go:141] libmachine: (bridge-418372) Calling .Close
	I0127 14:30:45.248403  619737 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:30:45.248459  619737 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:30:45.248473  619737 main.go:141] libmachine: Making call to close driver server
	I0127 14:30:45.248482  619737 main.go:141] libmachine: (bridge-418372) Calling .Close
	I0127 14:30:45.248433  619737 main.go:141] libmachine: (bridge-418372) DBG | Closing plugin on server side
	I0127 14:30:45.248748  619737 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:30:45.248801  619737 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:30:45.248762  619737 main.go:141] libmachine: (bridge-418372) DBG | Closing plugin on server side
	I0127 14:30:45.254357  619737 main.go:141] libmachine: Making call to close driver server
	I0127 14:30:45.254379  619737 main.go:141] libmachine: (bridge-418372) Calling .Close
	I0127 14:30:45.254623  619737 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:30:45.254643  619737 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:30:45.582068  619737 main.go:141] libmachine: Making call to close driver server
	I0127 14:30:45.582101  619737 main.go:141] libmachine: (bridge-418372) Calling .Close
	I0127 14:30:45.582471  619737 main.go:141] libmachine: (bridge-418372) DBG | Closing plugin on server side
	I0127 14:30:45.582507  619737 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:30:45.582518  619737 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:30:45.582563  619737 main.go:141] libmachine: Making call to close driver server
	I0127 14:30:45.582576  619737 main.go:141] libmachine: (bridge-418372) Calling .Close
	I0127 14:30:45.582914  619737 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:30:45.582964  619737 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:30:45.582961  619737 main.go:141] libmachine: (bridge-418372) DBG | Closing plugin on server side
	I0127 14:30:45.584401  619737 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0127 14:30:45.585530  619737 addons.go:514] duration metric: took 965.103449ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0127 14:30:45.700598  619737 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-418372" context rescaled to 1 replicas
	I0127 14:30:46.237276  619737 pod_ready.go:93] pod "etcd-bridge-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:46.237378  619737 pod_ready.go:82] duration metric: took 1.007212145s for pod "etcd-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:46.237408  619737 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:48.312592  619737 pod_ready.go:103] pod "kube-apiserver-bridge-418372" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:50.743934  619737 pod_ready.go:103] pod "kube-apiserver-bridge-418372" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:52.744428  619737 pod_ready.go:103] pod "kube-apiserver-bridge-418372" in "kube-system" namespace has status "Ready":"False"
	I0127 14:30:54.244761  619737 pod_ready.go:93] pod "kube-apiserver-bridge-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:54.244788  619737 pod_ready.go:82] duration metric: took 8.007362536s for pod "kube-apiserver-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.244803  619737 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.249656  619737 pod_ready.go:93] pod "kube-controller-manager-bridge-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:54.249672  619737 pod_ready.go:82] duration metric: took 4.861469ms for pod "kube-controller-manager-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.249681  619737 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-srq4p" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.254195  619737 pod_ready.go:93] pod "kube-proxy-srq4p" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:54.254210  619737 pod_ready.go:82] duration metric: took 4.523332ms for pod "kube-proxy-srq4p" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.254218  619737 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.258549  619737 pod_ready.go:93] pod "kube-scheduler-bridge-418372" in "kube-system" namespace has status "Ready":"True"
	I0127 14:30:54.258563  619737 pod_ready.go:82] duration metric: took 4.340039ms for pod "kube-scheduler-bridge-418372" in "kube-system" namespace to be "Ready" ...
	I0127 14:30:54.258569  619737 pod_ready.go:39] duration metric: took 9.048563243s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:30:54.258586  619737 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:30:54.258635  619737 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:30:54.275369  619737 api_server.go:72] duration metric: took 9.654981576s to wait for apiserver process to appear ...
	I0127 14:30:54.275386  619737 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:30:54.275399  619737 api_server.go:253] Checking apiserver healthz at https://192.168.72.158:8443/healthz ...
	I0127 14:30:54.279770  619737 api_server.go:279] https://192.168.72.158:8443/healthz returned 200:
	ok
	I0127 14:30:54.280702  619737 api_server.go:141] control plane version: v1.32.1
	I0127 14:30:54.280724  619737 api_server.go:131] duration metric: took 5.331614ms to wait for apiserver health ...
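The healthz probe logged above is a plain HTTPS GET against the apiserver's /healthz endpoint, which must return 200/ok before minikube proceeds. A rough manual equivalent once the profile's kubeconfig is in place (a sketch using kubectl's raw API access rather than minikube's internal client):

    # Rough manual equivalent of the healthz check above; assumes the kubeconfig
    # written for this profile is available and the context name matches the profile.
    kubectl --context bridge-418372 get --raw /healthz
    # expected output: ok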
	I0127 14:30:54.280731  619737 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:30:54.284562  619737 system_pods.go:59] 7 kube-system pods found
	I0127 14:30:54.284585  619737 system_pods.go:61] "coredns-668d6bf9bc-bxt2d" [30688a6a-decf-494a-892c-246d5fd4ae17] Running
	I0127 14:30:54.284591  619737 system_pods.go:61] "etcd-bridge-418372" [2c893afa-1f78-4889-9a64-8e6976949658] Running
	I0127 14:30:54.284595  619737 system_pods.go:61] "kube-apiserver-bridge-418372" [e70ad4b0-21ca-4833-b5f6-46fe9d39dbad] Running
	I0127 14:30:54.284599  619737 system_pods.go:61] "kube-controller-manager-bridge-418372" [e2272719-2527-4148-a4f0-13395e47ee74] Running
	I0127 14:30:54.284602  619737 system_pods.go:61] "kube-proxy-srq4p" [bbca3a8d-4a8a-474b-b117-77557ced6ccb] Running
	I0127 14:30:54.284606  619737 system_pods.go:61] "kube-scheduler-bridge-418372" [8cd36bf9-5b97-4ee8-871c-5d15211c4106] Running
	I0127 14:30:54.284609  619737 system_pods.go:61] "storage-provisioner" [69dac337-57c8-495b-9c4c-9f6d81adccaf] Running
	I0127 14:30:54.284615  619737 system_pods.go:74] duration metric: took 3.878571ms to wait for pod list to return data ...
	I0127 14:30:54.284624  619737 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:30:54.286667  619737 default_sa.go:45] found service account: "default"
	I0127 14:30:54.286687  619737 default_sa.go:55] duration metric: took 2.056793ms for default service account to be created ...
	I0127 14:30:54.286697  619737 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:30:54.447492  619737 system_pods.go:87] 7 kube-system pods found
	I0127 14:30:54.642792  619737 system_pods.go:105] "coredns-668d6bf9bc-bxt2d" [30688a6a-decf-494a-892c-246d5fd4ae17] Running
	I0127 14:30:54.642812  619737 system_pods.go:105] "etcd-bridge-418372" [2c893afa-1f78-4889-9a64-8e6976949658] Running
	I0127 14:30:54.642816  619737 system_pods.go:105] "kube-apiserver-bridge-418372" [e70ad4b0-21ca-4833-b5f6-46fe9d39dbad] Running
	I0127 14:30:54.642821  619737 system_pods.go:105] "kube-controller-manager-bridge-418372" [e2272719-2527-4148-a4f0-13395e47ee74] Running
	I0127 14:30:54.642826  619737 system_pods.go:105] "kube-proxy-srq4p" [bbca3a8d-4a8a-474b-b117-77557ced6ccb] Running
	I0127 14:30:54.642830  619737 system_pods.go:105] "kube-scheduler-bridge-418372" [8cd36bf9-5b97-4ee8-871c-5d15211c4106] Running
	I0127 14:30:54.642835  619737 system_pods.go:105] "storage-provisioner" [69dac337-57c8-495b-9c4c-9f6d81adccaf] Running
	I0127 14:30:54.642842  619737 system_pods.go:147] duration metric: took 356.138334ms to wait for k8s-apps to be running ...
	I0127 14:30:54.642848  619737 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 14:30:54.642892  619737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:30:54.663960  619737 system_svc.go:56] duration metric: took 21.1006ms WaitForService to wait for kubelet
	I0127 14:30:54.663982  619737 kubeadm.go:582] duration metric: took 10.043596268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:30:54.664012  619737 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:30:54.842788  619737 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:30:54.842821  619737 node_conditions.go:123] node cpu capacity is 2
	I0127 14:30:54.842841  619737 node_conditions.go:105] duration metric: took 178.823111ms to run NodePressure ...
	I0127 14:30:54.842855  619737 start.go:241] waiting for startup goroutines ...
	I0127 14:30:54.842864  619737 start.go:246] waiting for cluster config update ...
	I0127 14:30:54.842879  619737 start.go:255] writing updated cluster config ...
	I0127 14:30:54.843155  619737 ssh_runner.go:195] Run: rm -f paused
	I0127 14:30:54.893663  619737 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:30:54.895559  619737 out.go:177] * Done! kubectl is now configured to use "bridge-418372" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.102056832Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988629102035553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=593d5c8c-bf60-419e-b6ca-85b52c56ce80 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.102531968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb5200ef-b33e-41c5-976d-78890cc24848 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.102572033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb5200ef-b33e-41c5-976d-78890cc24848 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.102608949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=eb5200ef-b33e-41c5-976d-78890cc24848 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.130602337Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e9b6998-fc85-422e-966b-2471b9d2ff51 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.130657525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e9b6998-fc85-422e-966b-2471b9d2ff51 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.132101686Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4752633-0496-48c2-a27c-596972c1b4a3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.132490887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988629132468444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4752633-0496-48c2-a27c-596972c1b4a3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.132980031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e26208c9-cdd4-4811-a66f-02604da497ad name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.133027781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e26208c9-cdd4-4811-a66f-02604da497ad name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.133059059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e26208c9-cdd4-4811-a66f-02604da497ad name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.163001090Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=705aec8f-0895-44d4-92eb-0c8dddfe1c2c name=/runtime.v1.RuntimeService/Version
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.163085782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=705aec8f-0895-44d4-92eb-0c8dddfe1c2c name=/runtime.v1.RuntimeService/Version
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.164661223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c628249-6b71-43e0-a43f-9f85e0de32ba name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.165139068Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988629165111938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c628249-6b71-43e0-a43f-9f85e0de32ba name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.165799687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f07564c0-7390-46ee-a924-3add02314cdc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.165870848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f07564c0-7390-46ee-a924-3add02314cdc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.165916196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f07564c0-7390-46ee-a924-3add02314cdc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.195566240Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45bd3c4d-5b36-4344-adde-68d2e730356b name=/runtime.v1.RuntimeService/Version
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.195625705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45bd3c4d-5b36-4344-adde-68d2e730356b name=/runtime.v1.RuntimeService/Version
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.197189140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=703112c8-630f-4d97-a9a9-1d8874ca0ffb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.197657672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988629197636398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=703112c8-630f-4d97-a9a9-1d8874ca0ffb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.198505567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54248cdc-5c19-47dd-88a9-eeb2c0a9dbac name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.198548077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54248cdc-5c19-47dd-88a9-eeb2c0a9dbac name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:37:09 old-k8s-version-456130 crio[624]: time="2025-01-27 14:37:09.198589594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=54248cdc-5c19-47dd-88a9-eeb2c0a9dbac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 14:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051913] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040598] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.061734] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.852778] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.633968] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.949773] systemd-fstab-generator[550]: Ignoring "noauto" option for root device
	[  +0.054597] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055817] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.196544] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.123926] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.248912] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +6.669224] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.071494] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.203893] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[ +13.783962] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 14:17] systemd-fstab-generator[5083]: Ignoring "noauto" option for root device
	[Jan27 14:19] systemd-fstab-generator[5364]: Ignoring "noauto" option for root device
	[  +0.080632] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:37:09 up 23 min,  0 users,  load average: 0.20, 0.11, 0.06
	Linux old-k8s-version-456130 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0009186f0)
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00095bef0, 0x4f0ac20, 0xc000b31ea0, 0x1, 0xc0001000c0)
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d8d20, 0xc0001000c0)
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bc06c0, 0xc000bf8260)
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 27 14:37:03 old-k8s-version-456130 kubelet[7241]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 27 14:37:03 old-k8s-version-456130 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 14:37:03 old-k8s-version-456130 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 14:37:04 old-k8s-version-456130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 181.
	Jan 27 14:37:04 old-k8s-version-456130 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 14:37:04 old-k8s-version-456130 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 14:37:04 old-k8s-version-456130 kubelet[7249]: I0127 14:37:04.617115    7249 server.go:416] Version: v1.20.0
	Jan 27 14:37:04 old-k8s-version-456130 kubelet[7249]: I0127 14:37:04.617500    7249 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 14:37:04 old-k8s-version-456130 kubelet[7249]: I0127 14:37:04.621253    7249 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 14:37:04 old-k8s-version-456130 kubelet[7249]: W0127 14:37:04.623056    7249 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 14:37:04 old-k8s-version-456130 kubelet[7249]: I0127 14:37:04.623269    7249 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456130 -n old-k8s-version-456130
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 2 (233.268536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-456130" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (391.69s)

                                                
                                    

Test pass (266/316)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.36
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.32.1/json-events 3.89
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.13
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.61
22 TestOffline 83.68
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 129.91
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.48
35 TestAddons/parallel/Registry 19.55
37 TestAddons/parallel/InspektorGadget 12.02
38 TestAddons/parallel/MetricsServer 7.08
40 TestAddons/parallel/CSI 60.37
41 TestAddons/parallel/Headlamp 20.35
42 TestAddons/parallel/CloudSpanner 5.58
43 TestAddons/parallel/LocalPath 53.01
44 TestAddons/parallel/NvidiaDevicePlugin 6.51
45 TestAddons/parallel/Yakd 12.16
47 TestAddons/StoppedEnableDisable 91.11
48 TestCertOptions 60.36
49 TestCertExpiration 290.94
51 TestForceSystemdFlag 72.18
52 TestForceSystemdEnv 66.21
54 TestKVMDriverInstallOrUpdate 1.34
58 TestErrorSpam/setup 40.39
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.73
61 TestErrorSpam/pause 1.53
62 TestErrorSpam/unpause 1.69
63 TestErrorSpam/stop 4.55
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 57.16
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 36.09
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
75 TestFunctional/serial/CacheCmd/cache/add_local 1.05
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 35.88
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.43
86 TestFunctional/serial/LogsFileCmd 1.35
87 TestFunctional/serial/InvalidService 4.26
89 TestFunctional/parallel/ConfigCmd 0.35
90 TestFunctional/parallel/DashboardCmd 30.71
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.94
97 TestFunctional/parallel/ServiceCmdConnect 11.46
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 56.82
101 TestFunctional/parallel/SSHCmd 0.45
102 TestFunctional/parallel/CpCmd 1.57
103 TestFunctional/parallel/MySQL 27.31
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.41
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
113 TestFunctional/parallel/License 0.16
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.44
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.96
121 TestFunctional/parallel/ImageCommands/Setup 0.45
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.93
133 TestFunctional/parallel/ProfileCmd/profile_list 0.36
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
135 TestFunctional/parallel/ServiceCmd/DeployApp 10.17
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.3
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.4
138 TestFunctional/parallel/ServiceCmd/List 0.25
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.25
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
144 TestFunctional/parallel/ServiceCmd/Format 0.29
145 TestFunctional/parallel/ServiceCmd/URL 0.33
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.24
147 TestFunctional/parallel/MountCmd/any-port 22.73
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
151 TestFunctional/parallel/MountCmd/specific-port 1.91
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.27
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 198.37
160 TestMultiControlPlane/serial/DeployApp 5.88
161 TestMultiControlPlane/serial/PingHostFromPods 1.19
162 TestMultiControlPlane/serial/AddWorkerNode 55.85
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
165 TestMultiControlPlane/serial/CopyFile 12.77
166 TestMultiControlPlane/serial/StopSecondaryNode 91.48
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
168 TestMultiControlPlane/serial/RestartSecondaryNode 70.18
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 442.74
171 TestMultiControlPlane/serial/DeleteSecondaryNode 17.99
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
173 TestMultiControlPlane/serial/StopCluster 272.49
174 TestMultiControlPlane/serial/RestartCluster 124.15
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
176 TestMultiControlPlane/serial/AddSecondaryNode 75.48
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
181 TestJSONOutput/start/Command 83.88
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.7
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.58
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.36
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.19
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 88.61
213 TestMountStart/serial/StartWithMountFirst 26.47
214 TestMountStart/serial/VerifyMountFirst 0.38
215 TestMountStart/serial/StartWithMountSecond 29.38
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.55
218 TestMountStart/serial/VerifyMountPostDelete 0.37
219 TestMountStart/serial/Stop 1.27
220 TestMountStart/serial/RestartStopped 21.71
221 TestMountStart/serial/VerifyMountPostStop 0.38
224 TestMultiNode/serial/FreshStart2Nodes 119.22
225 TestMultiNode/serial/DeployApp2Nodes 4.56
226 TestMultiNode/serial/PingHostFrom2Pods 0.78
227 TestMultiNode/serial/AddNode 51.73
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.55
230 TestMultiNode/serial/CopyFile 7.03
231 TestMultiNode/serial/StopNode 2.23
232 TestMultiNode/serial/StartAfterStop 37.66
233 TestMultiNode/serial/RestartKeepsNodes 335.99
234 TestMultiNode/serial/DeleteNode 2.56
235 TestMultiNode/serial/StopMultiNode 181.38
236 TestMultiNode/serial/RestartMultiNode 111.84
237 TestMultiNode/serial/ValidateNameConflict 44.73
244 TestScheduledStopUnix 113.92
248 TestRunningBinaryUpgrade 214.45
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
254 TestNoKubernetes/serial/StartWithK8s 89.87
255 TestNoKubernetes/serial/StartWithStopK8s 64.84
256 TestNoKubernetes/serial/Start 47.04
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
258 TestNoKubernetes/serial/ProfileList 12.08
259 TestNoKubernetes/serial/Stop 1.29
270 TestNoKubernetes/serial/StartNoArgs 29.23
275 TestNetworkPlugins/group/false 2.84
279 TestStoppedBinaryUpgrade/Setup 0.44
280 TestStoppedBinaryUpgrade/Upgrade 116.37
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
283 TestPause/serial/Start 108.46
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
289 TestStartStop/group/no-preload/serial/FirstStart 107.25
291 TestStartStop/group/embed-certs/serial/FirstStart 54.27
292 TestStartStop/group/embed-certs/serial/DeployApp 10.29
293 TestStartStop/group/no-preload/serial/DeployApp 9.28
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
295 TestStartStop/group/embed-certs/serial/Stop 90.89
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
297 TestStartStop/group/no-preload/serial/Stop 90.94
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
300 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
301 TestStartStop/group/no-preload/serial/SecondStart 329.43
304 TestStartStop/group/old-k8s-version/serial/Stop 1.34
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
310 TestStartStop/group/no-preload/serial/Pause 2.63
312 TestStartStop/group/newest-cni/serial/FirstStart 54.24
313 TestStartStop/group/newest-cni/serial/DeployApp 0
314 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
315 TestStartStop/group/newest-cni/serial/Stop 7.32
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/newest-cni/serial/SecondStart 71.08
318 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
319 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
320 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
321 TestStartStop/group/newest-cni/serial/Pause 2.34
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.91
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 90.88
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300.03
330 TestNetworkPlugins/group/auto/Start 54.07
331 TestNetworkPlugins/group/auto/KubeletFlags 0.21
332 TestNetworkPlugins/group/auto/NetCatPod 11.22
333 TestNetworkPlugins/group/auto/DNS 0.15
334 TestNetworkPlugins/group/auto/Localhost 0.12
335 TestNetworkPlugins/group/auto/HairPin 0.11
336 TestNetworkPlugins/group/kindnet/Start 62.38
337 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
338 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
339 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
340 TestNetworkPlugins/group/kindnet/DNS 0.15
341 TestNetworkPlugins/group/kindnet/Localhost 0.13
342 TestNetworkPlugins/group/kindnet/HairPin 0.11
343 TestNetworkPlugins/group/calico/Start 76.41
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.77
348 TestNetworkPlugins/group/custom-flannel/Start 66.79
349 TestNetworkPlugins/group/calico/ControllerPod 6.01
350 TestNetworkPlugins/group/calico/KubeletFlags 0.22
351 TestNetworkPlugins/group/calico/NetCatPod 10.24
352 TestNetworkPlugins/group/calico/DNS 0.17
353 TestNetworkPlugins/group/calico/Localhost 0.12
354 TestNetworkPlugins/group/calico/HairPin 0.14
355 TestNetworkPlugins/group/enable-default-cni/Start 58.31
356 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
357 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.25
358 TestNetworkPlugins/group/custom-flannel/DNS 0.15
359 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
360 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
361 TestNetworkPlugins/group/flannel/Start 84.05
362 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
363 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.22
364 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
365 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
366 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
367 TestNetworkPlugins/group/bridge/Start 56.53
368 TestNetworkPlugins/group/flannel/ControllerPod 6.01
370 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
371 TestNetworkPlugins/group/flannel/NetCatPod 11.25
372 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
373 TestNetworkPlugins/group/flannel/DNS 0.16
374 TestNetworkPlugins/group/flannel/Localhost 0.13
375 TestNetworkPlugins/group/bridge/NetCatPod 9.23
376 TestNetworkPlugins/group/flannel/HairPin 0.14
377 TestNetworkPlugins/group/bridge/DNS 0.15
378 TestNetworkPlugins/group/bridge/Localhost 0.13
379 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (7.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-343942 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-343942 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.355269383s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.36s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 13:03:18.486110  562636 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0127 13:03:18.486284  562636 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-343942
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-343942: exit status 85 (61.617123ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-343942 | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC |          |
	|         | -p download-only-343942        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:03:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:03:11.172633  562648 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:03:11.172725  562648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:03:11.172733  562648 out.go:358] Setting ErrFile to fd 2...
	I0127 13:03:11.172737  562648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:03:11.172898  562648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	W0127 13:03:11.173018  562648 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20327-555419/.minikube/config/config.json: open /home/jenkins/minikube-integration/20327-555419/.minikube/config/config.json: no such file or directory
	I0127 13:03:11.173554  562648 out.go:352] Setting JSON to true
	I0127 13:03:11.174459  562648 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13536,"bootTime":1737969455,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:03:11.174557  562648 start.go:139] virtualization: kvm guest
	I0127 13:03:11.176666  562648 out.go:97] [download-only-343942] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:03:11.176811  562648 notify.go:220] Checking for updates...
	W0127 13:03:11.176806  562648 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 13:03:11.177924  562648 out.go:169] MINIKUBE_LOCATION=20327
	I0127 13:03:11.179141  562648 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:03:11.180252  562648 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 13:03:11.181303  562648 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 13:03:11.182416  562648 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 13:03:11.184320  562648 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 13:03:11.184502  562648 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:03:11.218391  562648 out.go:97] Using the kvm2 driver based on user configuration
	I0127 13:03:11.218412  562648 start.go:297] selected driver: kvm2
	I0127 13:03:11.218418  562648 start.go:901] validating driver "kvm2" against <nil>
	I0127 13:03:11.218713  562648 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:03:11.218818  562648 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20327-555419/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:03:11.232869  562648 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:03:11.232933  562648 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 13:03:11.233431  562648 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 13:03:11.233548  562648 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 13:03:11.233590  562648 cni.go:84] Creating CNI manager for ""
	I0127 13:03:11.233663  562648 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:03:11.233673  562648 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 13:03:11.233716  562648 start.go:340] cluster config:
	{Name:download-only-343942 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-343942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:03:11.233879  562648 iso.go:125] acquiring lock: {Name:mk0b06c73eff2439d8011e2d265689c91f6582e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:03:11.235264  562648 out.go:97] Downloading VM boot image ...
	I0127 13:03:11.235292  562648 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 13:03:13.872896  562648 out.go:97] Starting "download-only-343942" primary control-plane node in "download-only-343942" cluster
	I0127 13:03:13.872927  562648 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 13:03:13.892094  562648 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 13:03:13.892138  562648 cache.go:56] Caching tarball of preloaded images
	I0127 13:03:13.892305  562648 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 13:03:13.897368  562648 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 13:03:13.897404  562648 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0127 13:03:13.921208  562648 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-343942 host does not exist
	  To start a cluster, run: "minikube start -p download-only-343942"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-343942
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (3.89s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-540094 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-540094 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.885741046s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (3.89s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 13:03:22.690651  562636 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0127 13:03:22.690706  562636 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-540094
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-540094: exit status 85 (59.552907ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-343942 | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC |                     |
	|         | -p download-only-343942        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC | 27 Jan 25 13:03 UTC |
	| delete  | -p download-only-343942        | download-only-343942 | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC | 27 Jan 25 13:03 UTC |
	| start   | -o=json --download-only        | download-only-540094 | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC |                     |
	|         | -p download-only-540094        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:03:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:03:18.848140  562842 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:03:18.848259  562842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:03:18.848269  562842 out.go:358] Setting ErrFile to fd 2...
	I0127 13:03:18.848274  562842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:03:18.848491  562842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 13:03:18.849057  562842 out.go:352] Setting JSON to true
	I0127 13:03:18.850013  562842 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13544,"bootTime":1737969455,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:03:18.850179  562842 start.go:139] virtualization: kvm guest
	I0127 13:03:18.851804  562842 out.go:97] [download-only-540094] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:03:18.851932  562842 notify.go:220] Checking for updates...
	I0127 13:03:18.852913  562842 out.go:169] MINIKUBE_LOCATION=20327
	I0127 13:03:18.854140  562842 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:03:18.855271  562842 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 13:03:18.856465  562842 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 13:03:18.857480  562842 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-540094 host does not exist
	  To start a cluster, run: "minikube start -p download-only-540094"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-540094
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 13:03:23.249468  562636 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-945232 --alsologtostderr --binary-mirror http://127.0.0.1:34777 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-945232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-945232
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestOffline (83.68s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-385307 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-385307 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.469140133s)
helpers_test.go:175: Cleaning up "offline-crio-385307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-385307
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-385307: (1.210284615s)
--- PASS: TestOffline (83.68s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-293977
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-293977: exit status 85 (51.034ms)

                                                
                                                
-- stdout --
	* Profile "addons-293977" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-293977"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-293977
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-293977: exit status 85 (52.534172ms)

                                                
                                                
-- stdout --
	* Profile "addons-293977" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-293977"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (129.91s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-293977 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-293977 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.907084171s)
--- PASS: TestAddons/Setup (129.91s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-293977 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-293977 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.48s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-293977 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-293977 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b1608708-17ef-4056-9d18-636402de8414] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b1608708-17ef-4056-9d18-636402de8414] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004162889s
addons_test.go:633: (dbg) Run:  kubectl --context addons-293977 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-293977 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-293977 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.48s)

                                                
                                    
TestAddons/parallel/Registry (19.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.984446ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-6k6d6" [a783b125-0ec9-4bd0-bb67-3c277fbbe585] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004293431s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bf7ln" [5f7ccfa7-9c68-43e2-8b05-519e511c9924] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003409882s
addons_test.go:331: (dbg) Run:  kubectl --context addons-293977 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-293977 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-293977 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.793755017s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 ip
2025/01/27 13:06:10 [DEBUG] GET http://192.168.39.12:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.55s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.02s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4542z" [5f3245f0-d72d-49ae-8190-f090d1c7f470] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003838398s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-293977 addons disable inspektor-gadget --alsologtostderr -v=1: (6.011066685s)
--- PASS: TestAddons/parallel/InspektorGadget (12.02s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.08s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.328153ms
I0127 13:05:51.090312  562636 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 13:05:51.090339  562636 kapi.go:107] duration metric: took 6.824566ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-zqplg" [7bf6ee82-ae62-490e-8bc8-7ec6dd29d885] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002846665s
addons_test.go:402: (dbg) Run:  kubectl --context addons-293977 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.08s)
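A manual stand-in for this check, assuming the metrics-server addon is enabled on the same profile (the kubectl wait line replaces the test's poll loop and is not part of the test itself):

    kubectl --context addons-293977 -n kube-system wait --for=condition=ready pod -l k8s-app=metrics-server --timeout=120s
    kubectl --context addons-293977 top pods -n kube-system
    minikube -p addons-293977 addons disable metrics-server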

                                                
                                    
TestAddons/parallel/CSI (60.37s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0127 13:05:51.083547  562636 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.837223ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-293977 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-293977 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [de6adca6-d21b-4efe-b833-f103f600da80] Pending
helpers_test.go:344: "task-pv-pod" [de6adca6-d21b-4efe-b833-f103f600da80] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [de6adca6-d21b-4efe-b833-f103f600da80] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.00344595s
addons_test.go:511: (dbg) Run:  kubectl --context addons-293977 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-293977 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-293977 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-293977 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-293977 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-293977 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-293977 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [01a3dbe9-a3e0-4566-9bcc-e5076013f3f2] Pending
helpers_test.go:344: "task-pv-pod-restore" [01a3dbe9-a3e0-4566-9bcc-e5076013f3f2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [01a3dbe9-a3e0-4566-9bcc-e5076013f3f2] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004206298s
addons_test.go:553: (dbg) Run:  kubectl --context addons-293977 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-293977 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-293977 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-293977 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.823146022s)
--- PASS: TestAddons/parallel/CSI (60.37s)
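The snapshot and restore flow exercised above reduces to the following kubectl sequence (file names taken from the log; the testdata paths live in the minikube source tree, so substitute your own manifests when reproducing elsewhere):

    kubectl --context addons-293977 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-293977 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    # once task-pv-pod is Running, snapshot the volume and check readiness
    kubectl --context addons-293977 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-293977 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    # restore: drop the original consumers, then create a PVC from the snapshot plus a pod that mounts it
    kubectl --context addons-293977 delete pod task-pv-pod
    kubectl --context addons-293977 delete pvc hpvc
    kubectl --context addons-293977 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-293977 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml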

                                                
                                    
TestAddons/parallel/Headlamp (20.35s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-293977 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-m6sn8" [1f597b7f-b308-4548-9dd2-6cb3e7e8db23] Pending
helpers_test.go:344: "headlamp-69d78d796f-m6sn8" [1f597b7f-b308-4548-9dd2-6cb3e7e8db23] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-m6sn8" [1f597b7f-b308-4548-9dd2-6cb3e7e8db23] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.339640219s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-293977 addons disable headlamp --alsologtostderr -v=1: (6.073005628s)
--- PASS: TestAddons/parallel/Headlamp (20.35s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-ht8ml" [87892fb1-57fc-4ee5-8eb3-dd514df13a42] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004351292s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
TestAddons/parallel/LocalPath (53.01s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-293977 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-293977 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-293977 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fe4eab27-d692-416f-9d61-685b3bb93041] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fe4eab27-d692-416f-9d61-685b3bb93041] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fe4eab27-d692-416f-9d61-685b3bb93041] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004558149s
addons_test.go:906: (dbg) Run:  kubectl --context addons-293977 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 ssh "cat /opt/local-path-provisioner/pvc-cadd48f3-e676-45ff-bd54-b2a580221202_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-293977 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-293977 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-293977 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.192383803s)
--- PASS: TestAddons/parallel/LocalPath (53.01s)
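A rough manual replay of the local-path check, assuming the storage-provisioner-rancher addon is enabled; note that the pvc-<uid> directory under /opt/local-path-provisioner differs on every run, and the path below is the one from this log:

    kubectl --context addons-293977 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-293977 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # after the pod completes, the file it wrote is visible on the node in the provisioner's data directory
    minikube -p addons-293977 ssh "ls /opt/local-path-provisioner/"
    minikube -p addons-293977 ssh "cat /opt/local-path-provisioner/pvc-cadd48f3-e676-45ff-bd54-b2a580221202_default_test-pvc/file1"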

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vf7zd" [610bfd51-5c3c-4482-87c1-ef8a1006a42d] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004631521s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
TestAddons/parallel/Yakd (12.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-gpk7n" [ebfd79c1-98e9-446e-a3f8-10cbee8d9b17] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003139536s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-293977 addons disable yakd --alsologtostderr -v=1: (6.155428285s)
--- PASS: TestAddons/parallel/Yakd (12.16s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.11s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-293977
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-293977: (1m30.834813435s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-293977
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-293977
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-293977
--- PASS: TestAddons/StoppedEnableDisable (91.11s)
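What this test asserts is only that addon enable/disable keeps working against a stopped profile; the same sequence by hand:

    minikube stop -p addons-293977                       # about 1.5 minutes in the run above
    minikube addons enable dashboard -p addons-293977
    minikube addons disable dashboard -p addons-293977
    minikube addons disable gvisor -p addons-293977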

                                                
                                    
TestCertOptions (60.36s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-462765 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-462765 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (59.251038771s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-462765 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-462765 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-462765 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-462765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-462765
--- PASS: TestCertOptions (60.36s)
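The certificate check amounts to starting a cluster with extra SANs and a custom apiserver port, then inspecting the generated certificate; a minimal sketch (profile name reused from the log; the grep patterns are illustrative rather than part of the test):

    minikube start -p cert-options-462765 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # the requested IPs and names should show up as Subject Alternative Names
    minikube -p cert-options-462765 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A2 "Alternative Name"
    # the admin kubeconfig inside the VM should point at port 8555
    minikube ssh -p cert-options-462765 -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555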

                                                
                                    
TestCertExpiration (290.94s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-335486 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-335486 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m17.945202666s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-335486 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-335486 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (32.09134549s)
helpers_test.go:175: Cleaning up "cert-expiration-335486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-335486
--- PASS: TestCertExpiration (290.94s)
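This test provisions deliberately short-lived certificates and then restarts with a longer expiry to force regeneration; the equivalent commands, taken from the log:

    # first start: certificates valid for only 3 minutes
    minikube start -p cert-expiration-335486 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # ...wait for them to expire, then restart with a one-year expiry so the certs are reissued
    minikube start -p cert-expiration-335486 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio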

                                                
                                    
TestForceSystemdFlag (72.18s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-500518 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-500518 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.103895195s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-500518 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-500518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-500518
--- PASS: TestForceSystemdFlag (72.18s)
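The force-systemd check starts the node with --force-systemd and then reads CRI-O's generated drop-in config; a minimal sketch (the grep for a cgroup setting is an assumption about what to look for, not something the test does):

    minikube start -p force-systemd-flag-500518 --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
    minikube -p force-systemd-flag-500518 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup
    minikube delete -p force-systemd-flag-500518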

                                                
                                    
TestForceSystemdEnv (66.21s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-450776 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-450776 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m5.53962683s)
helpers_test.go:175: Cleaning up "force-systemd-env-450776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-450776
--- PASS: TestForceSystemdEnv (66.21s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.34s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0127 14:07:06.981541  562636 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 14:07:06.981716  562636 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 14:07:07.017285  562636 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 14:07:07.017659  562636 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 14:07:07.017733  562636 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2883508763/001/docker-machine-driver-kvm2
I0127 14:07:07.156934  562636 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2883508763/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc0008197d0 gz:0xc0008197d8 tar:0xc000819780 tar.bz2:0xc000819790 tar.gz:0xc0008197a0 tar.xz:0xc0008197b0 tar.zst:0xc0008197c0 tbz2:0xc000819790 tgz:0xc0008197a0 txz:0xc0008197b0 tzst:0xc0008197c0 xz:0xc0008197e0 zip:0xc0008197f0 zst:0xc0008197e8] Getters:map[file:0xc0018d0290 http:0xc000074140 https:0xc000074190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 14:07:07.156976  562636 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2883508763/001/docker-machine-driver-kvm2
I0127 14:07:07.809806  562636 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 14:07:07.826365  562636 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 14:07:07.858948  562636 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 14:07:07.858980  562636 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 14:07:07.859054  562636 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 14:07:07.859082  562636 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2883508763/002/docker-machine-driver-kvm2
I0127 14:07:07.882933  562636 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2883508763/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc0008197d0 gz:0xc0008197d8 tar:0xc000819780 tar.bz2:0xc000819790 tar.gz:0xc0008197a0 tar.xz:0xc0008197b0 tar.zst:0xc0008197c0 tbz2:0xc000819790 tgz:0xc0008197a0 txz:0xc0008197b0 tzst:0xc0008197c0 xz:0xc0008197e0 zip:0xc0008197f0 zst:0xc0008197e8] Getters:map[file:0xc0013ecba0 http:0xc0007cf9a0 https:0xc0007cf9f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 14:07:07.882969  562636 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2883508763/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.34s)
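The two warnings above are expected: the installer first tries the arch-specific release artifact and, when its checksum file returns 404, falls back to the common name. A manual approximation of that fallback (illustrative only; the real code also verifies checksums and file modes):

    VER=v1.3.0
    ARCH_URL=https://github.com/kubernetes/minikube/releases/download/${VER}/docker-machine-driver-kvm2-amd64
    COMMON_URL=https://github.com/kubernetes/minikube/releases/download/${VER}/docker-machine-driver-kvm2
    curl -fsSLo docker-machine-driver-kvm2 "${ARCH_URL}" || curl -fsSLo docker-machine-driver-kvm2 "${COMMON_URL}"
    chmod +x docker-machine-driver-kvm2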

                                                
                                    
TestErrorSpam/setup (40.39s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-652585 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-652585 --driver=kvm2  --container-runtime=crio
E0127 13:10:34.434682  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:34.443933  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:34.455177  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:34.476756  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:34.518105  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:34.599469  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:34.760928  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:35.082641  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:35.724751  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:37.006815  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:39.568686  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:44.690101  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:10:54.931938  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-652585 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-652585 --driver=kvm2  --container-runtime=crio: (40.385188284s)
--- PASS: TestErrorSpam/setup (40.39s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
TestErrorSpam/unpause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
TestErrorSpam/stop (4.55s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 stop: (2.309904647s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 stop: (1.21420459s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-652585 --log_dir /tmp/nospam-652585 stop: (1.021611556s)
--- PASS: TestErrorSpam/stop (4.55s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20327-555419/.minikube/files/etc/test/nested/copy/562636/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.16s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104449 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0127 13:11:15.413263  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:11:56.376011  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-104449 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (57.161209289s)
--- PASS: TestFunctional/serial/StartWithProxy (57.16s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.09s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 13:12:02.459080  562636 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104449 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-104449 --alsologtostderr -v=8: (36.084591486s)
functional_test.go:663: soft start took 36.085231323s for "functional-104449" cluster.
I0127 13:12:38.544060  562636 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (36.09s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-104449 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-104449 cache add registry.k8s.io/pause:3.1: (1.04579695s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-104449 cache add registry.k8s.io/pause:3.3: (1.104289779s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-104449 cache add registry.k8s.io/pause:latest: (1.101309986s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-104449 /tmp/TestFunctionalserialCacheCmdcacheadd_local3860232440/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 cache add minikube-local-cache-test:functional-104449
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 cache delete minikube-local-cache-test:functional-104449
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-104449
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104449 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.071629ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
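The cache-reload scenario can be replayed directly against the profile: add an image to minikube's cache, remove it inside the node, confirm it is gone, then push the cache back. Commands follow the log:

    minikube -p functional-104449 cache add registry.k8s.io/pause:latest
    minikube -p functional-104449 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-104449 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
    minikube -p functional-104449 cache reload
    minikube -p functional-104449 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again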

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 kubectl -- --context functional-104449 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-104449 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.88s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104449 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0127 13:13:18.298132  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-104449 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.879723327s)
functional_test.go:761: restart took 35.879881301s for "functional-104449" cluster.
I0127 13:13:21.112913  562636 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (35.88s)
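The restart above simply threads a component flag through --extra-config; a sketch with the flag value from the log (other kubeadm-supported apiserver flags can be substituted):

    minikube start -p functional-104449 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all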

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-104449 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-104449 logs: (1.431447638s)
--- PASS: TestFunctional/serial/LogsCmd (1.43s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 logs --file /tmp/TestFunctionalserialLogsFileCmd2666065541/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-104449 logs --file /tmp/TestFunctionalserialLogsFileCmd2666065541/001/logs.txt: (1.347603759s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                    
TestFunctional/serial/InvalidService (4.26s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-104449 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-104449
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-104449: exit status 115 (263.650691ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.7:32687 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-104449 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)
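The invalid-service case shows minikube service failing cleanly when no running pod backs a service; a sketch of the same check (manifest path from the log; exit status 115 corresponds to the SVC_UNREACHABLE error shown above):

    kubectl --context functional-104449 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-104449
    echo $?                                   # 115 in the run above
    kubectl --context functional-104449 delete -f testdata/invalidsvc.yaml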

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104449 config get cpus: exit status 14 (50.474719ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104449 config get cpus: exit status 14 (54.322395ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
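The behaviour exercised here: config get on an unset key exits with status 14, while set/get/unset otherwise round-trip. A minimal sketch:

    minikube -p functional-104449 config get cpus     # exit status 14 while unset
    minikube -p functional-104449 config set cpus 2
    minikube -p functional-104449 config get cpus     # prints 2
    minikube -p functional-104449 config unset cpus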

                                                
                                    
TestFunctional/parallel/DashboardCmd (30.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-104449 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-104449 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 570699: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.71s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104449 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-104449 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.596485ms)

                                                
                                                
-- stdout --
	* [functional-104449] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:13:49.546976  570583 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:13:49.547118  570583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:49.547130  570583 out.go:358] Setting ErrFile to fd 2...
	I0127 13:13:49.547137  570583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:49.547316  570583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 13:13:49.547861  570583 out.go:352] Setting JSON to false
	I0127 13:13:49.548919  570583 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14174,"bootTime":1737969455,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:13:49.549033  570583 start.go:139] virtualization: kvm guest
	I0127 13:13:49.550721  570583 out.go:177] * [functional-104449] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:13:49.552115  570583 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 13:13:49.552139  570583 notify.go:220] Checking for updates...
	I0127 13:13:49.554202  570583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:13:49.555377  570583 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 13:13:49.556778  570583 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 13:13:49.558037  570583 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:13:49.559233  570583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:13:49.560966  570583 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:13:49.561547  570583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:13:49.561643  570583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:13:49.577733  570583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39945
	I0127 13:13:49.578136  570583 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:13:49.578647  570583 main.go:141] libmachine: Using API Version  1
	I0127 13:13:49.578671  570583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:13:49.578976  570583 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:13:49.579187  570583 main.go:141] libmachine: (functional-104449) Calling .DriverName
	I0127 13:13:49.579433  570583 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:13:49.579718  570583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:13:49.579783  570583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:13:49.596643  570583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42161
	I0127 13:13:49.597011  570583 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:13:49.597462  570583 main.go:141] libmachine: Using API Version  1
	I0127 13:13:49.597482  570583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:13:49.597845  570583 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:13:49.598108  570583 main.go:141] libmachine: (functional-104449) Calling .DriverName
	I0127 13:13:49.630767  570583 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:13:49.631813  570583 start.go:297] selected driver: kvm2
	I0127 13:13:49.631829  570583 start.go:901] validating driver "kvm2" against &{Name:functional-104449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-104449 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:13:49.631943  570583 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:13:49.633722  570583 out.go:201] 
	W0127 13:13:49.634710  570583 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 13:13:49.635660  570583 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104449 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
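Outside the harness, the dry-run check above amounts to invoking `minikube start --dry-run` with an undersized `--memory` value and expecting exit code 23, the RSRC_INSUFFICIENT_REQ_MEMORY reason shown in the stderr block. The following is a minimal standalone Go sketch, not the functional_test.go helper itself; the binary path and profile name are simply the ones from this run and are assumed to still be present.
-- go sketch (illustrative, not part of the test suite) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken from this run; adjust as needed.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-104449", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 23 {
		// 23 is the RSRC_INSUFFICIENT_REQ_MEMORY exit code seen in the log above.
		fmt.Println("dry run rejected the 250MB request, as expected")
		return
	}
	fmt.Printf("unexpected result: %v\n", err)
}
-- /go sketch --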

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104449 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-104449 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (145.834309ms)

                                                
                                                
-- stdout --
	* [functional-104449] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:13:43.568596  570150 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:13:43.568711  570150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:43.568717  570150 out.go:358] Setting ErrFile to fd 2...
	I0127 13:13:43.568722  570150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:43.568986  570150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 13:13:43.569494  570150 out.go:352] Setting JSON to false
	I0127 13:13:43.570459  570150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14168,"bootTime":1737969455,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:13:43.570582  570150 start.go:139] virtualization: kvm guest
	I0127 13:13:43.572532  570150 out.go:177] * [functional-104449] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 13:13:43.573862  570150 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 13:13:43.573875  570150 notify.go:220] Checking for updates...
	I0127 13:13:43.576015  570150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:13:43.577177  570150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 13:13:43.578243  570150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 13:13:43.579261  570150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:13:43.580363  570150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:13:43.581728  570150 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:13:43.582144  570150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:13:43.582193  570150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:13:43.599161  570150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0127 13:13:43.599672  570150 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:13:43.600272  570150 main.go:141] libmachine: Using API Version  1
	I0127 13:13:43.600305  570150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:13:43.600656  570150 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:13:43.600907  570150 main.go:141] libmachine: (functional-104449) Calling .DriverName
	I0127 13:13:43.601172  570150 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:13:43.601522  570150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:13:43.601613  570150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:13:43.616923  570150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0127 13:13:43.617336  570150 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:13:43.617948  570150 main.go:141] libmachine: Using API Version  1
	I0127 13:13:43.617974  570150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:13:43.618290  570150 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:13:43.618577  570150 main.go:141] libmachine: (functional-104449) Calling .DriverName
	I0127 13:13:43.652677  570150 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 13:13:43.653695  570150 start.go:297] selected driver: kvm2
	I0127 13:13:43.653709  570150 start.go:901] validating driver "kvm2" against &{Name:functional-104449 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-104449 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:13:43.653810  570150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:13:43.655573  570150 out.go:201] 
	W0127 13:13:43.656607  570150 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 13:13:43.657626  570150 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-104449 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-104449 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-xlmn5" [34d92a98-df30-454d-abd4-9f1d99664829] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-xlmn5" [34d92a98-df30-454d-abd4-9f1d99664829] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003452352s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.50.7:31047
functional_test.go:1675: http://192.168.50.7:31047: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-xlmn5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.7:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.7:31047
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.46s)
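The success body above is the standard echoserver response; conceptually the check is just an HTTP GET against the NodePort URL that `minikube service hello-node-connect --url` printed. A minimal Go sketch of that request follows; the URL is the one observed in this run and will differ on any other cluster.
-- go sketch (illustrative, not part of the test suite) --
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// NodePort URL printed by `minikube service hello-node-connect --url` in this run.
	resp, err := http.Get("http://192.168.50.7:31047/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("HTTP %d\n%s", resp.StatusCode, body)
}
-- /go sketch --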

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (56.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0e20b992-09ee-4ca0-a38d-046ef1163bfd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00386867s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-104449 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-104449 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-104449 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-104449 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ee5f84dd-2197-4e40-9c0d-d84328abbaf6] Pending
helpers_test.go:344: "sp-pod" [ee5f84dd-2197-4e40-9c0d-d84328abbaf6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ee5f84dd-2197-4e40-9c0d-d84328abbaf6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.01506506s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-104449 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-104449 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-104449 delete -f testdata/storage-provisioner/pod.yaml: (3.826264149s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-104449 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eafae810-9791-422d-a884-316a7a51ae08] Pending
helpers_test.go:344: "sp-pod" [eafae810-9791-422d-a884-316a7a51ae08] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [eafae810-9791-422d-a884-316a7a51ae08] Running
2025/01/27 13:14:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.003595305s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-104449 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (56.82s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh -n functional-104449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 cp functional-104449:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd551738698/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh -n functional-104449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh -n functional-104449 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.57s)

                                                
                                    
TestFunctional/parallel/MySQL (27.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-104449 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-t4d4x" [9fea9c5a-7761-4c40-939e-b13f89f92491] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-t4d4x" [9fea9c5a-7761-4c40-939e-b13f89f92491] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.004638155s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-104449 exec mysql-58ccfd96bb-t4d4x -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-104449 exec mysql-58ccfd96bb-t4d4x -- mysql -ppassword -e "show databases;": exit status 1 (165.738625ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 13:14:07.798247  562636 retry.go:31] will retry after 803.039243ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-104449 exec mysql-58ccfd96bb-t4d4x -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.31s)
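The transient `ERROR 2002` above is expected while mysqld is still initializing inside the pod; the harness simply retries (`retry.go:31 ... will retry after 803.039243ms`) until `show databases;` succeeds. Below is a hedged sketch of that retry-until-ready pattern; the backoff schedule is illustrative rather than copied from retry.go, while the pod name and context are the ones observed in this run.
-- go sketch (illustrative, not part of the test suite) --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Pod name and kubectl context are the ones observed in this run.
	args := []string{"--context", "functional-104449", "exec", "mysql-58ccfd96bb-t4d4x",
		"--", "mysql", "-ppassword", "-e", "show databases;"}
	backoff := 800 * time.Millisecond // illustrative; retry.go uses its own schedule
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d: mysqld not ready yet (%v), retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("gave up waiting for mysqld")
}
-- /go sketch --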

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/562636/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "sudo cat /etc/test/nested/copy/562636/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/562636.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "sudo cat /etc/ssl/certs/562636.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/562636.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "sudo cat /usr/share/ca-certificates/562636.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5626362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "sudo cat /etc/ssl/certs/5626362.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5626362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "sudo cat /usr/share/ca-certificates/5626362.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-104449 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104449 ssh "sudo systemctl is-active docker": exit status 1 (236.689063ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104449 ssh "sudo systemctl is-active containerd": exit status 1 (211.721026ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
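The "Non-zero exit ... exit status 1" entries above are the pass condition here: with `--container-runtime=crio`, both the docker and containerd units are stopped, so `systemctl is-active` prints `inactive` and exits non-zero (status 3 inside the VM, surfaced as status 1 by `minikube ssh`). A rough Go sketch of the same check, reusing this run's binary path and profile name:
-- go sketch (illustrative, not part of the test suite) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// With --container-runtime=crio, the docker unit should be stopped:
	// `systemctl is-active` then prints "inactive" and exits non-zero.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-104449",
		"ssh", "sudo systemctl is-active docker")
	out, err := cmd.CombinedOutput()
	if err != nil && strings.Contains(string(out), "inactive") {
		fmt.Println("docker is disabled, as expected for the crio runtime")
		return
	}
	fmt.Printf("unexpected: err=%v, output=%q\n", err, out)
}
-- /go sketch --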

                                                
                                    
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-104449 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-104449
localhost/kicbase/echo-server:functional-104449
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-104449 image ls --format short --alsologtostderr:
I0127 13:14:09.538596  571340 out.go:345] Setting OutFile to fd 1 ...
I0127 13:14:09.538840  571340 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:14:09.538851  571340 out.go:358] Setting ErrFile to fd 2...
I0127 13:14:09.538858  571340 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:14:09.539452  571340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
I0127 13:14:09.540371  571340 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 13:14:09.540530  571340 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 13:14:09.541126  571340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 13:14:09.541191  571340 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:14:09.557876  571340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37385
I0127 13:14:09.559234  571340 main.go:141] libmachine: () Calling .GetVersion
I0127 13:14:09.559986  571340 main.go:141] libmachine: Using API Version  1
I0127 13:14:09.560012  571340 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:14:09.560538  571340 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:14:09.560750  571340 main.go:141] libmachine: (functional-104449) Calling .GetState
I0127 13:14:09.562781  571340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 13:14:09.562821  571340 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:14:09.580111  571340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34221
I0127 13:14:09.580633  571340 main.go:141] libmachine: () Calling .GetVersion
I0127 13:14:09.581098  571340 main.go:141] libmachine: Using API Version  1
I0127 13:14:09.581125  571340 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:14:09.581481  571340 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:14:09.581675  571340 main.go:141] libmachine: (functional-104449) Calling .DriverName
I0127 13:14:09.581846  571340 ssh_runner.go:195] Run: systemctl --version
I0127 13:14:09.581868  571340 main.go:141] libmachine: (functional-104449) Calling .GetSSHHostname
I0127 13:14:09.584658  571340 main.go:141] libmachine: (functional-104449) DBG | domain functional-104449 has defined MAC address 52:54:00:f3:28:b0 in network mk-functional-104449
I0127 13:14:09.585064  571340 main.go:141] libmachine: (functional-104449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:28:b0", ip: ""} in network mk-functional-104449: {Iface:virbr1 ExpiryTime:2025-01-27 14:11:19 +0000 UTC Type:0 Mac:52:54:00:f3:28:b0 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:functional-104449 Clientid:01:52:54:00:f3:28:b0}
I0127 13:14:09.585097  571340 main.go:141] libmachine: (functional-104449) DBG | domain functional-104449 has defined IP address 192.168.50.7 and MAC address 52:54:00:f3:28:b0 in network mk-functional-104449
I0127 13:14:09.585321  571340 main.go:141] libmachine: (functional-104449) Calling .GetSSHPort
I0127 13:14:09.585483  571340 main.go:141] libmachine: (functional-104449) Calling .GetSSHKeyPath
I0127 13:14:09.585686  571340 main.go:141] libmachine: (functional-104449) Calling .GetSSHUsername
I0127 13:14:09.585844  571340 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/functional-104449/id_rsa Username:docker}
I0127 13:14:09.672063  571340 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:14:09.713728  571340 main.go:141] libmachine: Making call to close driver server
I0127 13:14:09.713741  571340 main.go:141] libmachine: (functional-104449) Calling .Close
I0127 13:14:09.714000  571340 main.go:141] libmachine: (functional-104449) DBG | Closing plugin on server side
I0127 13:14:09.714023  571340 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:14:09.714038  571340 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:14:09.714048  571340 main.go:141] libmachine: Making call to close driver server
I0127 13:14:09.714062  571340 main.go:141] libmachine: (functional-104449) Calling .Close
I0127 13:14:09.714359  571340 main.go:141] libmachine: (functional-104449) DBG | Closing plugin on server side
I0127 13:14:09.714400  571340 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:14:09.714424  571340 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
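As the stderr trace shows, `image ls` is backed by `sudo crictl images --output json` run over SSH inside the VM, with the tags then flattened into the short listing above. The sketch below decodes that JSON directly on the node (for example after `minikube ssh`); the struct covers only the fields this sketch needs and is an assumption about a subset of crictl's schema, not a full definition of it.
-- go sketch (illustrative, not part of the test suite) --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors only the fields this sketch needs; crictl's JSON carries more.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
}

func main() {
	// Same command the test runs inside the VM, assumed here to be run on the node itself.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var listing struct {
		Images []image `json:"images"`
	}
	if err := json.Unmarshal(out, &listing); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, img := range listing.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}
-- /go sketch --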

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-104449 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-104449  | 6d6b27e1c324d | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 9bea9f2796e23 | 196MB  |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| localhost/kicbase/echo-server           | functional-104449  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-104449 image ls --format table --alsologtostderr:
I0127 13:14:10.065635  571493 out.go:345] Setting OutFile to fd 1 ...
I0127 13:14:10.065745  571493 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:14:10.065754  571493 out.go:358] Setting ErrFile to fd 2...
I0127 13:14:10.065759  571493 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:14:10.065935  571493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
I0127 13:14:10.066570  571493 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 13:14:10.066692  571493 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 13:14:10.067076  571493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 13:14:10.067152  571493 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:14:10.082808  571493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37749
I0127 13:14:10.083247  571493 main.go:141] libmachine: () Calling .GetVersion
I0127 13:14:10.083816  571493 main.go:141] libmachine: Using API Version  1
I0127 13:14:10.083843  571493 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:14:10.084157  571493 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:14:10.084380  571493 main.go:141] libmachine: (functional-104449) Calling .GetState
I0127 13:14:10.085969  571493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 13:14:10.086010  571493 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:14:10.100532  571493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
I0127 13:14:10.101004  571493 main.go:141] libmachine: () Calling .GetVersion
I0127 13:14:10.101494  571493 main.go:141] libmachine: Using API Version  1
I0127 13:14:10.101511  571493 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:14:10.101857  571493 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:14:10.102081  571493 main.go:141] libmachine: (functional-104449) Calling .DriverName
I0127 13:14:10.102257  571493 ssh_runner.go:195] Run: systemctl --version
I0127 13:14:10.102295  571493 main.go:141] libmachine: (functional-104449) Calling .GetSSHHostname
I0127 13:14:10.104893  571493 main.go:141] libmachine: (functional-104449) DBG | domain functional-104449 has defined MAC address 52:54:00:f3:28:b0 in network mk-functional-104449
I0127 13:14:10.105347  571493 main.go:141] libmachine: (functional-104449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:28:b0", ip: ""} in network mk-functional-104449: {Iface:virbr1 ExpiryTime:2025-01-27 14:11:19 +0000 UTC Type:0 Mac:52:54:00:f3:28:b0 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:functional-104449 Clientid:01:52:54:00:f3:28:b0}
I0127 13:14:10.105375  571493 main.go:141] libmachine: (functional-104449) DBG | domain functional-104449 has defined IP address 192.168.50.7 and MAC address 52:54:00:f3:28:b0 in network mk-functional-104449
I0127 13:14:10.105505  571493 main.go:141] libmachine: (functional-104449) Calling .GetSSHPort
I0127 13:14:10.105671  571493 main.go:141] libmachine: (functional-104449) Calling .GetSSHKeyPath
I0127 13:14:10.105842  571493 main.go:141] libmachine: (functional-104449) Calling .GetSSHUsername
I0127 13:14:10.105974  571493 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/functional-104449/id_rsa Username:docker}
I0127 13:14:10.183218  571493 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:14:10.250337  571493 main.go:141] libmachine: Making call to close driver server
I0127 13:14:10.250361  571493 main.go:141] libmachine: (functional-104449) Calling .Close
I0127 13:14:10.250724  571493 main.go:141] libmachine: (functional-104449) DBG | Closing plugin on server side
I0127 13:14:10.250728  571493 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:14:10.250768  571493 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:14:10.250778  571493 main.go:141] libmachine: Making call to close driver server
I0127 13:14:10.250786  571493 main.go:141] libmachine: (functional-104449) Calling .Close
I0127 13:14:10.251050  571493 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:14:10.251074  571493 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-104449 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514ee
b0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.
k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha
256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9"],"repoTags":["docker.io/library/nginx:latest"],"size":"195872148"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-104449"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0
d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.i
o/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"6d6b27e1c324ddd7f4e8c80839e1859cfa61f48c13dc9ed258873604ed2a24ed","repoDigests":["localhost/minikube-local-cache-test@sha256:156564eeac0d78204cca05d244fd6225d3b82792c5c576e22b0abebc1a3dcc7c"],"repoTags":["localhost/minikube-local-cache-test:functional-104449"],"size":"3330"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registr
y.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-104449 image ls --format json --alsologtostderr:
I0127 13:14:09.841667  571435 out.go:345] Setting OutFile to fd 1 ...
I0127 13:14:09.842184  571435 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:14:09.842202  571435 out.go:358] Setting ErrFile to fd 2...
I0127 13:14:09.842209  571435 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:14:09.842687  571435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
I0127 13:14:09.843798  571435 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 13:14:09.843929  571435 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 13:14:09.844338  571435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 13:14:09.844385  571435 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:14:09.862053  571435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36967
I0127 13:14:09.862504  571435 main.go:141] libmachine: () Calling .GetVersion
I0127 13:14:09.862970  571435 main.go:141] libmachine: Using API Version  1
I0127 13:14:09.863000  571435 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:14:09.863366  571435 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:14:09.863540  571435 main.go:141] libmachine: (functional-104449) Calling .GetState
I0127 13:14:09.865467  571435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 13:14:09.865511  571435 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:14:09.880330  571435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
I0127 13:14:09.880659  571435 main.go:141] libmachine: () Calling .GetVersion
I0127 13:14:09.881088  571435 main.go:141] libmachine: Using API Version  1
I0127 13:14:09.881126  571435 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:14:09.881474  571435 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:14:09.881682  571435 main.go:141] libmachine: (functional-104449) Calling .DriverName
I0127 13:14:09.881856  571435 ssh_runner.go:195] Run: systemctl --version
I0127 13:14:09.881911  571435 main.go:141] libmachine: (functional-104449) Calling .GetSSHHostname
I0127 13:14:09.884206  571435 main.go:141] libmachine: (functional-104449) DBG | domain functional-104449 has defined MAC address 52:54:00:f3:28:b0 in network mk-functional-104449
I0127 13:14:09.884561  571435 main.go:141] libmachine: (functional-104449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:28:b0", ip: ""} in network mk-functional-104449: {Iface:virbr1 ExpiryTime:2025-01-27 14:11:19 +0000 UTC Type:0 Mac:52:54:00:f3:28:b0 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:functional-104449 Clientid:01:52:54:00:f3:28:b0}
I0127 13:14:09.884590  571435 main.go:141] libmachine: (functional-104449) DBG | domain functional-104449 has defined IP address 192.168.50.7 and MAC address 52:54:00:f3:28:b0 in network mk-functional-104449
I0127 13:14:09.884717  571435 main.go:141] libmachine: (functional-104449) Calling .GetSSHPort
I0127 13:14:09.884935  571435 main.go:141] libmachine: (functional-104449) Calling .GetSSHKeyPath
I0127 13:14:09.885082  571435 main.go:141] libmachine: (functional-104449) Calling .GetSSHUsername
I0127 13:14:09.885214  571435 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/functional-104449/id_rsa Username:docker}
I0127 13:14:09.965662  571435 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:14:10.011668  571435 main.go:141] libmachine: Making call to close driver server
I0127 13:14:10.011678  571435 main.go:141] libmachine: (functional-104449) Calling .Close
I0127 13:14:10.011967  571435 main.go:141] libmachine: (functional-104449) DBG | Closing plugin on server side
I0127 13:14:10.011983  571435 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:14:10.012022  571435 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:14:10.012035  571435 main.go:141] libmachine: Making call to close driver server
I0127 13:14:10.012045  571435 main.go:141] libmachine: (functional-104449) Calling .Close
I0127 13:14:10.012509  571435 main.go:141] libmachine: (functional-104449) DBG | Closing plugin on server side
I0127 13:14:10.012527  571435 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:14:10.012551  571435 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-104449 image ls --format yaml --alsologtostderr:
- id: 6d6b27e1c324ddd7f4e8c80839e1859cfa61f48c13dc9ed258873604ed2a24ed
repoDigests:
- localhost/minikube-local-cache-test@sha256:156564eeac0d78204cca05d244fd6225d3b82792c5c576e22b0abebc1a3dcc7c
repoTags:
- localhost/minikube-local-cache-test:functional-104449
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9
repoTags:
- docker.io/library/nginx:latest
size: "195872148"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-104449
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-104449 image ls --format yaml --alsologtostderr:
I0127 13:14:09.619034  571371 out.go:345] Setting OutFile to fd 1 ...
I0127 13:14:09.619128  571371 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:14:09.619136  571371 out.go:358] Setting ErrFile to fd 2...
I0127 13:14:09.619140  571371 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:14:09.619293  571371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
I0127 13:14:09.619857  571371 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 13:14:09.619955  571371 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 13:14:09.620275  571371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 13:14:09.620318  571371 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:14:09.636204  571371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34765
I0127 13:14:09.636678  571371 main.go:141] libmachine: () Calling .GetVersion
I0127 13:14:09.637264  571371 main.go:141] libmachine: Using API Version  1
I0127 13:14:09.637292  571371 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:14:09.637655  571371 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:14:09.637830  571371 main.go:141] libmachine: (functional-104449) Calling .GetState
I0127 13:14:09.639476  571371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 13:14:09.639513  571371 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:14:09.654441  571371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40919
I0127 13:14:09.654818  571371 main.go:141] libmachine: () Calling .GetVersion
I0127 13:14:09.655271  571371 main.go:141] libmachine: Using API Version  1
I0127 13:14:09.655301  571371 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:14:09.655598  571371 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:14:09.655769  571371 main.go:141] libmachine: (functional-104449) Calling .DriverName
I0127 13:14:09.655934  571371 ssh_runner.go:195] Run: systemctl --version
I0127 13:14:09.655958  571371 main.go:141] libmachine: (functional-104449) Calling .GetSSHHostname
I0127 13:14:09.658442  571371 main.go:141] libmachine: (functional-104449) DBG | domain functional-104449 has defined MAC address 52:54:00:f3:28:b0 in network mk-functional-104449
I0127 13:14:09.658844  571371 main.go:141] libmachine: (functional-104449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:28:b0", ip: ""} in network mk-functional-104449: {Iface:virbr1 ExpiryTime:2025-01-27 14:11:19 +0000 UTC Type:0 Mac:52:54:00:f3:28:b0 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:functional-104449 Clientid:01:52:54:00:f3:28:b0}
I0127 13:14:09.658873  571371 main.go:141] libmachine: (functional-104449) DBG | domain functional-104449 has defined IP address 192.168.50.7 and MAC address 52:54:00:f3:28:b0 in network mk-functional-104449
I0127 13:14:09.658985  571371 main.go:141] libmachine: (functional-104449) Calling .GetSSHPort
I0127 13:14:09.659155  571371 main.go:141] libmachine: (functional-104449) Calling .GetSSHKeyPath
I0127 13:14:09.659295  571371 main.go:141] libmachine: (functional-104449) Calling .GetSSHUsername
I0127 13:14:09.659454  571371 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/functional-104449/id_rsa Username:docker}
I0127 13:14:09.744397  571371 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 13:14:09.783964  571371 main.go:141] libmachine: Making call to close driver server
I0127 13:14:09.783981  571371 main.go:141] libmachine: (functional-104449) Calling .Close
I0127 13:14:09.784280  571371 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:14:09.784305  571371 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:14:09.784323  571371 main.go:141] libmachine: Making call to close driver server
I0127 13:14:09.784333  571371 main.go:141] libmachine: (functional-104449) Calling .Close
I0127 13:14:09.784570  571371 main.go:141] libmachine: (functional-104449) DBG | Closing plugin on server side
I0127 13:14:09.784597  571371 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:14:09.784613  571371 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104449 ssh pgrep buildkitd: exit status 1 (208.300941ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image build -t localhost/my-image:functional-104449 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-104449 image build -t localhost/my-image:functional-104449 testdata/build --alsologtostderr: (2.535447203s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-104449 image build -t localhost/my-image:functional-104449 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2f721621a25
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-104449
--> f4a2e700d6e
Successfully tagged localhost/my-image:functional-104449
f4a2e700d6e4835d95c41b7e630ce7f0b81162c0f4b7f7232993617d256ca616
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-104449 image build -t localhost/my-image:functional-104449 testdata/build --alsologtostderr:
I0127 13:14:09.977442  571470 out.go:345] Setting OutFile to fd 1 ...
I0127 13:14:09.977573  571470 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:14:09.977606  571470 out.go:358] Setting ErrFile to fd 2...
I0127 13:14:09.977614  571470 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 13:14:09.977915  571470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
I0127 13:14:09.978805  571470 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 13:14:09.979385  571470 config.go:182] Loaded profile config "functional-104449": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 13:14:09.979728  571470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 13:14:09.979761  571470 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:14:09.995162  571470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
I0127 13:14:09.995673  571470 main.go:141] libmachine: () Calling .GetVersion
I0127 13:14:09.996248  571470 main.go:141] libmachine: Using API Version  1
I0127 13:14:09.996272  571470 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:14:09.996669  571470 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:14:09.996912  571470 main.go:141] libmachine: (functional-104449) Calling .GetState
I0127 13:14:09.998797  571470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 13:14:09.998837  571470 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 13:14:10.014375  571470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38407
I0127 13:14:10.014801  571470 main.go:141] libmachine: () Calling .GetVersion
I0127 13:14:10.015308  571470 main.go:141] libmachine: Using API Version  1
I0127 13:14:10.015333  571470 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 13:14:10.015694  571470 main.go:141] libmachine: () Calling .GetMachineName
I0127 13:14:10.015899  571470 main.go:141] libmachine: (functional-104449) Calling .DriverName
I0127 13:14:10.016140  571470 ssh_runner.go:195] Run: systemctl --version
I0127 13:14:10.016173  571470 main.go:141] libmachine: (functional-104449) Calling .GetSSHHostname
I0127 13:14:10.019066  571470 main.go:141] libmachine: (functional-104449) DBG | domain functional-104449 has defined MAC address 52:54:00:f3:28:b0 in network mk-functional-104449
I0127 13:14:10.019495  571470 main.go:141] libmachine: (functional-104449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:28:b0", ip: ""} in network mk-functional-104449: {Iface:virbr1 ExpiryTime:2025-01-27 14:11:19 +0000 UTC Type:0 Mac:52:54:00:f3:28:b0 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:functional-104449 Clientid:01:52:54:00:f3:28:b0}
I0127 13:14:10.019530  571470 main.go:141] libmachine: (functional-104449) DBG | domain functional-104449 has defined IP address 192.168.50.7 and MAC address 52:54:00:f3:28:b0 in network mk-functional-104449
I0127 13:14:10.019623  571470 main.go:141] libmachine: (functional-104449) Calling .GetSSHPort
I0127 13:14:10.019818  571470 main.go:141] libmachine: (functional-104449) Calling .GetSSHKeyPath
I0127 13:14:10.019975  571470 main.go:141] libmachine: (functional-104449) Calling .GetSSHUsername
I0127 13:14:10.020116  571470 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/functional-104449/id_rsa Username:docker}
I0127 13:14:10.105241  571470 build_images.go:161] Building image from path: /tmp/build.541051008.tar
I0127 13:14:10.105300  571470 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 13:14:10.117054  571470 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.541051008.tar
I0127 13:14:10.121767  571470 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.541051008.tar: stat -c "%s %y" /var/lib/minikube/build/build.541051008.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.541051008.tar': No such file or directory
I0127 13:14:10.121796  571470 ssh_runner.go:362] scp /tmp/build.541051008.tar --> /var/lib/minikube/build/build.541051008.tar (3072 bytes)
I0127 13:14:10.149665  571470 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.541051008
I0127 13:14:10.159214  571470 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.541051008 -xf /var/lib/minikube/build/build.541051008.tar
I0127 13:14:10.168656  571470 crio.go:315] Building image: /var/lib/minikube/build/build.541051008
I0127 13:14:10.168711  571470 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-104449 /var/lib/minikube/build/build.541051008 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0127 13:14:12.432628  571470 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-104449 /var/lib/minikube/build/build.541051008 --cgroup-manager=cgroupfs: (2.263885009s)
I0127 13:14:12.432713  571470 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.541051008
I0127 13:14:12.448859  571470 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.541051008.tar
I0127 13:14:12.458717  571470 build_images.go:217] Built localhost/my-image:functional-104449 from /tmp/build.541051008.tar
I0127 13:14:12.458747  571470 build_images.go:133] succeeded building to: functional-104449
I0127 13:14:12.458755  571470 build_images.go:134] failed building to: 
I0127 13:14:12.458818  571470 main.go:141] libmachine: Making call to close driver server
I0127 13:14:12.458834  571470 main.go:141] libmachine: (functional-104449) Calling .Close
I0127 13:14:12.459123  571470 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:14:12.459144  571470 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 13:14:12.459153  571470 main.go:141] libmachine: Making call to close driver server
I0127 13:14:12.459162  571470 main.go:141] libmachine: (functional-104449) Calling .Close
I0127 13:14:12.459420  571470 main.go:141] libmachine: Successfully made call to close driver server
I0127 13:14:12.459431  571470 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)
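The three STEPs logged above imply a very small build fixture. As a hedged sketch (the actual contents of testdata/build are not shown in this report, so the file names below are assumptions), the same check can be replayed by hand against a running functional-104449 profile:
	# hypothetical reconstruction of the build context implied by the logged STEPs
	mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
	echo 'test content' > content.txt
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	out/minikube-linux-amd64 -p functional-104449 image build -t localhost/my-image:functional-104449 . --alsologtostderr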

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-104449
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image load --daemon kicbase/echo-server:functional-104449 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-104449 image load --daemon kicbase/echo-server:functional-104449 --alsologtostderr: (1.554319029s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.93s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "305.79051ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.38637ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "358.515876ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "62.391323ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-104449 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-104449 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-cf2nj" [507fadde-213b-41d3-8c11-3cc5dad7d152] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-cf2nj" [507fadde-213b-41d3-8c11-3cc5dad7d152] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003775136s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image load --daemon kicbase/echo-server:functional-104449 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-104449 image load --daemon kicbase/echo-server:functional-104449 --alsologtostderr: (2.030659269s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (9.218270807s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-104449
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image load --daemon kicbase/echo-server:functional-104449 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 service list -o json
functional_test.go:1494: Took "252.666101ms" to run "out/minikube-linux-amd64 -p functional-104449 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.50.7:32352
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.50.7:32352
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
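Taken together, the ServiceCmd subtests above exercise one workflow: create a deployment, expose it as a NodePort, then resolve its URL. A hedged manual replay of that flow (assumes functional-104449 is running; the NodePort 32352 seen in this run will differ):
	kubectl --context functional-104449 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-104449 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-104449 wait --for=condition=Available deployment/hello-node --timeout=120s
	out/minikube-linux-amd64 -p functional-104449 service hello-node --url
	out/minikube-linux-amd64 -p functional-104449 service list -o json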

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image save kicbase/echo-server:functional-104449 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-104449 image save kicbase/echo-server:functional-104449 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.236735908s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.24s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (22.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-104449 /tmp/TestFunctionalparallelMountCmdany-port1580468946/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737983623664173526" to /tmp/TestFunctionalparallelMountCmdany-port1580468946/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737983623664173526" to /tmp/TestFunctionalparallelMountCmdany-port1580468946/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737983623664173526" to /tmp/TestFunctionalparallelMountCmdany-port1580468946/001/test-1737983623664173526
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104449 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (216.79096ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 13:13:43.881321  562636 retry.go:31] will retry after 288.841867ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 13:13 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 13:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 13:13 test-1737983623664173526
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh cat /mount-9p/test-1737983623664173526
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-104449 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [170696ea-9ce2-464c-8b25-1d6c50b87a16] Pending
helpers_test.go:344: "busybox-mount" [170696ea-9ce2-464c-8b25-1d6c50b87a16] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [170696ea-9ce2-464c-8b25-1d6c50b87a16] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [170696ea-9ce2-464c-8b25-1d6c50b87a16] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 20.003437359s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-104449 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104449 /tmp/TestFunctionalparallelMountCmdany-port1580468946/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (22.73s)
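The any-port subtest automates a host-to-guest 9p mount check. A minimal sketch of the same check done by hand (assumes functional-104449 is running; the mount command stays in the foreground, so it is backgrounded here and the paths are illustrative):
	mkdir -p /tmp/mount-demo && echo 'hello from host' > /tmp/mount-demo/created-by-test
	out/minikube-linux-amd64 mount -p functional-104449 /tmp/mount-demo:/mount-9p &
	MOUNT_PID=$!
	sleep 5
	out/minikube-linux-amd64 -p functional-104449 ssh 'findmnt -T /mount-9p | grep 9p'
	out/minikube-linux-amd64 -p functional-104449 ssh 'cat /mount-9p/created-by-test'
	kill "$MOUNT_PID"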

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image rm kicbase/echo-server:functional-104449 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-104449
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 image save --daemon kicbase/echo-server:functional-104449 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-104449
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
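The ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon subtests amount to one round trip of an image through a tarball and back into the cluster runtime. A hedged consolidated sketch (the tarball path is illustrative):
	out/minikube-linux-amd64 -p functional-104449 image save kicbase/echo-server:functional-104449 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-104449 image rm kicbase/echo-server:functional-104449
	out/minikube-linux-amd64 -p functional-104449 image load /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-104449 image ls | grep echo-server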

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-104449 /tmp/TestFunctionalparallelMountCmdspecific-port1756618647/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104449 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (224.134846ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 13:14:06.618604  562636 retry.go:31] will retry after 700.527964ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104449 /tmp/TestFunctionalparallelMountCmdspecific-port1756618647/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104449 ssh "sudo umount -f /mount-9p": exit status 1 (188.89337ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-104449 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104449 /tmp/TestFunctionalparallelMountCmdspecific-port1756618647/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-104449 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2781889842/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-104449 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2781889842/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-104449 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2781889842/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104449 ssh "findmnt -T" /mount1: exit status 1 (233.196735ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 13:14:08.537065  562636 retry.go:31] will retry after 402.757402ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-104449 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-104449 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104449 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2781889842/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104449 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2781889842/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104449 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2781889842/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-104449
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-104449
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-104449
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (198.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-523095 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 13:15:34.435444  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:16:02.141333  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-523095 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m17.703128219s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (198.37s)
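After an HA start like the one above, the control-plane topology can be confirmed with a few follow-up commands; this is a sketch under the assumption that the ha-523095 profile and its kubeconfig context exist on the host:
	out/minikube-linux-amd64 -p ha-523095 status -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-523095 node list
	kubectl --context ha-523095 get nodes -o wide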

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-523095 -- rollout status deployment/busybox: (3.81394416s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-cf76t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-cpf56 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-vpvsb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-cf76t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-cpf56 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-vpvsb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-cf76t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-cpf56 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-vpvsb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.88s)
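The DeployApp step repeats the same nslookup against each busybox replica. The loop below is a hedged equivalent using plain kubectl instead of the minikube kubectl wrapper (assumes the ha-523095 kubeconfig context; the busybox pod names vary per run):
	for pod in $(kubectl --context ha-523095 get pods -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep '^busybox-'); do
	  kubectl --context ha-523095 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done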

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-cf76t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-cf76t -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-cpf56 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-cpf56 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-vpvsb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-523095 -- exec busybox-58667487b6-vpvsb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)

TestMultiControlPlane/serial/AddWorkerNode (55.85s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-523095 -v=7 --alsologtostderr
E0127 13:18:28.672658  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:28.679029  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:28.690401  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:28.711777  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:28.753152  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:28.834601  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:28.996146  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:29.318059  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:29.959719  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:31.241110  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:33.802522  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:18:38.924243  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-523095 -v=7 --alsologtostderr: (54.975132508s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.85s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-523095 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

TestMultiControlPlane/serial/CopyFile (12.77s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 status --output json -v=7 --alsologtostderr
E0127 13:18:49.166586  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp testdata/cp-test.txt ha-523095:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3040279602/001/cp-test_ha-523095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095:/home/docker/cp-test.txt ha-523095-m02:/home/docker/cp-test_ha-523095_ha-523095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m02 "sudo cat /home/docker/cp-test_ha-523095_ha-523095-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095:/home/docker/cp-test.txt ha-523095-m03:/home/docker/cp-test_ha-523095_ha-523095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m03 "sudo cat /home/docker/cp-test_ha-523095_ha-523095-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095:/home/docker/cp-test.txt ha-523095-m04:/home/docker/cp-test_ha-523095_ha-523095-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m04 "sudo cat /home/docker/cp-test_ha-523095_ha-523095-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp testdata/cp-test.txt ha-523095-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3040279602/001/cp-test_ha-523095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m02:/home/docker/cp-test.txt ha-523095:/home/docker/cp-test_ha-523095-m02_ha-523095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095 "sudo cat /home/docker/cp-test_ha-523095-m02_ha-523095.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m02:/home/docker/cp-test.txt ha-523095-m03:/home/docker/cp-test_ha-523095-m02_ha-523095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m03 "sudo cat /home/docker/cp-test_ha-523095-m02_ha-523095-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m02:/home/docker/cp-test.txt ha-523095-m04:/home/docker/cp-test_ha-523095-m02_ha-523095-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m04 "sudo cat /home/docker/cp-test_ha-523095-m02_ha-523095-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp testdata/cp-test.txt ha-523095-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3040279602/001/cp-test_ha-523095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m03:/home/docker/cp-test.txt ha-523095:/home/docker/cp-test_ha-523095-m03_ha-523095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095 "sudo cat /home/docker/cp-test_ha-523095-m03_ha-523095.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m03:/home/docker/cp-test.txt ha-523095-m02:/home/docker/cp-test_ha-523095-m03_ha-523095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m02 "sudo cat /home/docker/cp-test_ha-523095-m03_ha-523095-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m03:/home/docker/cp-test.txt ha-523095-m04:/home/docker/cp-test_ha-523095-m03_ha-523095-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m04 "sudo cat /home/docker/cp-test_ha-523095-m03_ha-523095-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp testdata/cp-test.txt ha-523095-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3040279602/001/cp-test_ha-523095-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m04:/home/docker/cp-test.txt ha-523095:/home/docker/cp-test_ha-523095-m04_ha-523095.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095 "sudo cat /home/docker/cp-test_ha-523095-m04_ha-523095.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m04:/home/docker/cp-test.txt ha-523095-m02:/home/docker/cp-test_ha-523095-m04_ha-523095-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m02 "sudo cat /home/docker/cp-test_ha-523095-m04_ha-523095-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 cp ha-523095-m04:/home/docker/cp-test.txt ha-523095-m03:/home/docker/cp-test_ha-523095-m04_ha-523095-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 ssh -n ha-523095-m03 "sudo cat /home/docker/cp-test_ha-523095-m04_ha-523095-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.77s)

TestMultiControlPlane/serial/StopSecondaryNode (91.48s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 node stop m02 -v=7 --alsologtostderr
E0127 13:19:09.648603  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:19:50.610003  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-523095 node stop m02 -v=7 --alsologtostderr: (1m30.841750296s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-523095 status -v=7 --alsologtostderr: exit status 7 (635.77674ms)
-- stdout --
	ha-523095
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-523095-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-523095-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-523095-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0127 13:20:32.347715  576502 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:20:32.347833  576502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:20:32.347842  576502 out.go:358] Setting ErrFile to fd 2...
	I0127 13:20:32.347847  576502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:20:32.348024  576502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 13:20:32.348197  576502 out.go:352] Setting JSON to false
	I0127 13:20:32.348230  576502 mustload.go:65] Loading cluster: ha-523095
	I0127 13:20:32.348283  576502 notify.go:220] Checking for updates...
	I0127 13:20:32.348622  576502 config.go:182] Loaded profile config "ha-523095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:20:32.348642  576502 status.go:174] checking status of ha-523095 ...
	I0127 13:20:32.349041  576502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:20:32.349078  576502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:32.368470  576502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44287
	I0127 13:20:32.368950  576502 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:32.369565  576502 main.go:141] libmachine: Using API Version  1
	I0127 13:20:32.369613  576502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:32.369972  576502 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:32.370177  576502 main.go:141] libmachine: (ha-523095) Calling .GetState
	I0127 13:20:32.371794  576502 status.go:371] ha-523095 host status = "Running" (err=<nil>)
	I0127 13:20:32.371811  576502 host.go:66] Checking if "ha-523095" exists ...
	I0127 13:20:32.372094  576502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:20:32.372139  576502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:32.389255  576502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0127 13:20:32.389649  576502 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:32.390099  576502 main.go:141] libmachine: Using API Version  1
	I0127 13:20:32.390135  576502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:32.390452  576502 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:32.390655  576502 main.go:141] libmachine: (ha-523095) Calling .GetIP
	I0127 13:20:32.393118  576502 main.go:141] libmachine: (ha-523095) DBG | domain ha-523095 has defined MAC address 52:54:00:6e:42:9c in network mk-ha-523095
	I0127 13:20:32.393552  576502 main.go:141] libmachine: (ha-523095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:42:9c", ip: ""} in network mk-ha-523095: {Iface:virbr1 ExpiryTime:2025-01-27 14:14:40 +0000 UTC Type:0 Mac:52:54:00:6e:42:9c Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ha-523095 Clientid:01:52:54:00:6e:42:9c}
	I0127 13:20:32.393599  576502 main.go:141] libmachine: (ha-523095) DBG | domain ha-523095 has defined IP address 192.168.39.88 and MAC address 52:54:00:6e:42:9c in network mk-ha-523095
	I0127 13:20:32.393746  576502 host.go:66] Checking if "ha-523095" exists ...
	I0127 13:20:32.394033  576502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:20:32.394095  576502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:32.409097  576502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0127 13:20:32.409543  576502 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:32.410048  576502 main.go:141] libmachine: Using API Version  1
	I0127 13:20:32.410069  576502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:32.410384  576502 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:32.410571  576502 main.go:141] libmachine: (ha-523095) Calling .DriverName
	I0127 13:20:32.410732  576502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:20:32.410762  576502 main.go:141] libmachine: (ha-523095) Calling .GetSSHHostname
	I0127 13:20:32.413230  576502 main.go:141] libmachine: (ha-523095) DBG | domain ha-523095 has defined MAC address 52:54:00:6e:42:9c in network mk-ha-523095
	I0127 13:20:32.413729  576502 main.go:141] libmachine: (ha-523095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:42:9c", ip: ""} in network mk-ha-523095: {Iface:virbr1 ExpiryTime:2025-01-27 14:14:40 +0000 UTC Type:0 Mac:52:54:00:6e:42:9c Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ha-523095 Clientid:01:52:54:00:6e:42:9c}
	I0127 13:20:32.413756  576502 main.go:141] libmachine: (ha-523095) DBG | domain ha-523095 has defined IP address 192.168.39.88 and MAC address 52:54:00:6e:42:9c in network mk-ha-523095
	I0127 13:20:32.413959  576502 main.go:141] libmachine: (ha-523095) Calling .GetSSHPort
	I0127 13:20:32.414171  576502 main.go:141] libmachine: (ha-523095) Calling .GetSSHKeyPath
	I0127 13:20:32.414352  576502 main.go:141] libmachine: (ha-523095) Calling .GetSSHUsername
	I0127 13:20:32.414544  576502 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/ha-523095/id_rsa Username:docker}
	I0127 13:20:32.494565  576502 ssh_runner.go:195] Run: systemctl --version
	I0127 13:20:32.501313  576502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:20:32.516951  576502 kubeconfig.go:125] found "ha-523095" server: "https://192.168.39.254:8443"
	I0127 13:20:32.516991  576502 api_server.go:166] Checking apiserver status ...
	I0127 13:20:32.517030  576502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:20:32.532391  576502 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1156/cgroup
	W0127 13:20:32.542348  576502 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1156/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:20:32.542422  576502 ssh_runner.go:195] Run: ls
	I0127 13:20:32.546880  576502 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 13:20:32.551591  576502 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 13:20:32.551611  576502 status.go:463] ha-523095 apiserver status = Running (err=<nil>)
	I0127 13:20:32.551620  576502 status.go:176] ha-523095 status: &{Name:ha-523095 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:20:32.551638  576502 status.go:174] checking status of ha-523095-m02 ...
	I0127 13:20:32.551966  576502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:20:32.552010  576502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:32.569727  576502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I0127 13:20:32.570181  576502 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:32.570650  576502 main.go:141] libmachine: Using API Version  1
	I0127 13:20:32.570672  576502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:32.571024  576502 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:32.571208  576502 main.go:141] libmachine: (ha-523095-m02) Calling .GetState
	I0127 13:20:32.572675  576502 status.go:371] ha-523095-m02 host status = "Stopped" (err=<nil>)
	I0127 13:20:32.572686  576502 status.go:384] host is not running, skipping remaining checks
	I0127 13:20:32.572691  576502 status.go:176] ha-523095-m02 status: &{Name:ha-523095-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:20:32.572705  576502 status.go:174] checking status of ha-523095-m03 ...
	I0127 13:20:32.573071  576502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:20:32.573125  576502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:32.587456  576502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I0127 13:20:32.587933  576502 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:32.588403  576502 main.go:141] libmachine: Using API Version  1
	I0127 13:20:32.588423  576502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:32.588716  576502 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:32.588906  576502 main.go:141] libmachine: (ha-523095-m03) Calling .GetState
	I0127 13:20:32.590439  576502 status.go:371] ha-523095-m03 host status = "Running" (err=<nil>)
	I0127 13:20:32.590459  576502 host.go:66] Checking if "ha-523095-m03" exists ...
	I0127 13:20:32.590842  576502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:20:32.590889  576502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:32.604719  576502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
	I0127 13:20:32.605174  576502 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:32.605624  576502 main.go:141] libmachine: Using API Version  1
	I0127 13:20:32.605646  576502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:32.605965  576502 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:32.606129  576502 main.go:141] libmachine: (ha-523095-m03) Calling .GetIP
	I0127 13:20:32.608499  576502 main.go:141] libmachine: (ha-523095-m03) DBG | domain ha-523095-m03 has defined MAC address 52:54:00:d9:a2:b2 in network mk-ha-523095
	I0127 13:20:32.608893  576502 main.go:141] libmachine: (ha-523095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a2:b2", ip: ""} in network mk-ha-523095: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:41 +0000 UTC Type:0 Mac:52:54:00:d9:a2:b2 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-523095-m03 Clientid:01:52:54:00:d9:a2:b2}
	I0127 13:20:32.608934  576502 main.go:141] libmachine: (ha-523095-m03) DBG | domain ha-523095-m03 has defined IP address 192.168.39.197 and MAC address 52:54:00:d9:a2:b2 in network mk-ha-523095
	I0127 13:20:32.609036  576502 host.go:66] Checking if "ha-523095-m03" exists ...
	I0127 13:20:32.609441  576502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:20:32.609484  576502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:32.624743  576502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I0127 13:20:32.625134  576502 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:32.625559  576502 main.go:141] libmachine: Using API Version  1
	I0127 13:20:32.625590  576502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:32.625879  576502 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:32.626051  576502 main.go:141] libmachine: (ha-523095-m03) Calling .DriverName
	I0127 13:20:32.626263  576502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:20:32.626288  576502 main.go:141] libmachine: (ha-523095-m03) Calling .GetSSHHostname
	I0127 13:20:32.628906  576502 main.go:141] libmachine: (ha-523095-m03) DBG | domain ha-523095-m03 has defined MAC address 52:54:00:d9:a2:b2 in network mk-ha-523095
	I0127 13:20:32.629316  576502 main.go:141] libmachine: (ha-523095-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:a2:b2", ip: ""} in network mk-ha-523095: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:41 +0000 UTC Type:0 Mac:52:54:00:d9:a2:b2 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-523095-m03 Clientid:01:52:54:00:d9:a2:b2}
	I0127 13:20:32.629346  576502 main.go:141] libmachine: (ha-523095-m03) DBG | domain ha-523095-m03 has defined IP address 192.168.39.197 and MAC address 52:54:00:d9:a2:b2 in network mk-ha-523095
	I0127 13:20:32.629522  576502 main.go:141] libmachine: (ha-523095-m03) Calling .GetSSHPort
	I0127 13:20:32.629708  576502 main.go:141] libmachine: (ha-523095-m03) Calling .GetSSHKeyPath
	I0127 13:20:32.629862  576502 main.go:141] libmachine: (ha-523095-m03) Calling .GetSSHUsername
	I0127 13:20:32.629990  576502 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/ha-523095-m03/id_rsa Username:docker}
	I0127 13:20:32.714937  576502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:20:32.732087  576502 kubeconfig.go:125] found "ha-523095" server: "https://192.168.39.254:8443"
	I0127 13:20:32.732114  576502 api_server.go:166] Checking apiserver status ...
	I0127 13:20:32.732143  576502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:20:32.747810  576502 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup
	W0127 13:20:32.757466  576502 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:20:32.757506  576502 ssh_runner.go:195] Run: ls
	I0127 13:20:32.762086  576502 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 13:20:32.766476  576502 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 13:20:32.766500  576502 status.go:463] ha-523095-m03 apiserver status = Running (err=<nil>)
	I0127 13:20:32.766510  576502 status.go:176] ha-523095-m03 status: &{Name:ha-523095-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:20:32.766527  576502 status.go:174] checking status of ha-523095-m04 ...
	I0127 13:20:32.766798  576502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:20:32.766833  576502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:32.782940  576502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I0127 13:20:32.783399  576502 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:32.783873  576502 main.go:141] libmachine: Using API Version  1
	I0127 13:20:32.783892  576502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:32.784257  576502 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:32.784458  576502 main.go:141] libmachine: (ha-523095-m04) Calling .GetState
	I0127 13:20:32.786131  576502 status.go:371] ha-523095-m04 host status = "Running" (err=<nil>)
	I0127 13:20:32.786149  576502 host.go:66] Checking if "ha-523095-m04" exists ...
	I0127 13:20:32.786440  576502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:20:32.786485  576502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:32.803880  576502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46875
	I0127 13:20:32.804319  576502 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:32.804905  576502 main.go:141] libmachine: Using API Version  1
	I0127 13:20:32.804925  576502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:32.805232  576502 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:32.805415  576502 main.go:141] libmachine: (ha-523095-m04) Calling .GetIP
	I0127 13:20:32.808237  576502 main.go:141] libmachine: (ha-523095-m04) DBG | domain ha-523095-m04 has defined MAC address 52:54:00:62:ff:8e in network mk-ha-523095
	I0127 13:20:32.808682  576502 main.go:141] libmachine: (ha-523095-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ff:8e", ip: ""} in network mk-ha-523095: {Iface:virbr1 ExpiryTime:2025-01-27 14:18:06 +0000 UTC Type:0 Mac:52:54:00:62:ff:8e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-523095-m04 Clientid:01:52:54:00:62:ff:8e}
	I0127 13:20:32.808707  576502 main.go:141] libmachine: (ha-523095-m04) DBG | domain ha-523095-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:62:ff:8e in network mk-ha-523095
	I0127 13:20:32.808854  576502 host.go:66] Checking if "ha-523095-m04" exists ...
	I0127 13:20:32.809165  576502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:20:32.809204  576502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:32.823402  576502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45731
	I0127 13:20:32.823802  576502 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:32.824216  576502 main.go:141] libmachine: Using API Version  1
	I0127 13:20:32.824235  576502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:32.824522  576502 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:32.824708  576502 main.go:141] libmachine: (ha-523095-m04) Calling .DriverName
	I0127 13:20:32.824866  576502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:20:32.824888  576502 main.go:141] libmachine: (ha-523095-m04) Calling .GetSSHHostname
	I0127 13:20:32.827341  576502 main.go:141] libmachine: (ha-523095-m04) DBG | domain ha-523095-m04 has defined MAC address 52:54:00:62:ff:8e in network mk-ha-523095
	I0127 13:20:32.827727  576502 main.go:141] libmachine: (ha-523095-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ff:8e", ip: ""} in network mk-ha-523095: {Iface:virbr1 ExpiryTime:2025-01-27 14:18:06 +0000 UTC Type:0 Mac:52:54:00:62:ff:8e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-523095-m04 Clientid:01:52:54:00:62:ff:8e}
	I0127 13:20:32.827753  576502 main.go:141] libmachine: (ha-523095-m04) DBG | domain ha-523095-m04 has defined IP address 192.168.39.137 and MAC address 52:54:00:62:ff:8e in network mk-ha-523095
	I0127 13:20:32.827888  576502 main.go:141] libmachine: (ha-523095-m04) Calling .GetSSHPort
	I0127 13:20:32.828101  576502 main.go:141] libmachine: (ha-523095-m04) Calling .GetSSHKeyPath
	I0127 13:20:32.828241  576502 main.go:141] libmachine: (ha-523095-m04) Calling .GetSSHUsername
	I0127 13:20:32.828367  576502 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/ha-523095-m04/id_rsa Username:docker}
	I0127 13:20:32.918029  576502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:20:32.932642  576502 status.go:176] ha-523095-m04 status: &{Name:ha-523095-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.48s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

TestMultiControlPlane/serial/RestartSecondaryNode (70.18s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 node start m02 -v=7 --alsologtostderr
E0127 13:20:34.434951  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:21:12.532379  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-523095 node start m02 -v=7 --alsologtostderr: (1m9.284049423s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (70.18s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (442.74s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-523095 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-523095 -v=7 --alsologtostderr
E0127 13:23:28.672525  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:23:56.373926  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:25:34.434489  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-523095 -v=7 --alsologtostderr: (4m33.721869987s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-523095 --wait=true -v=7 --alsologtostderr
E0127 13:26:57.503166  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:28.673071  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-523095 --wait=true -v=7 --alsologtostderr: (2m48.915747291s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-523095
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (442.74s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.99s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-523095 node delete m03 -v=7 --alsologtostderr: (17.264285759s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.99s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

TestMultiControlPlane/serial/StopCluster (272.49s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 stop -v=7 --alsologtostderr
E0127 13:30:34.434572  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:33:28.672726  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-523095 stop -v=7 --alsologtostderr: (4m32.381121289s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-523095 status -v=7 --alsologtostderr: exit status 7 (110.451123ms)
-- stdout --
	ha-523095
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-523095-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-523095-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0127 13:33:58.374235  580677 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:33:58.374353  580677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:33:58.374364  580677 out.go:358] Setting ErrFile to fd 2...
	I0127 13:33:58.374371  580677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:33:58.374539  580677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 13:33:58.374724  580677 out.go:352] Setting JSON to false
	I0127 13:33:58.374764  580677 mustload.go:65] Loading cluster: ha-523095
	I0127 13:33:58.374872  580677 notify.go:220] Checking for updates...
	I0127 13:33:58.375224  580677 config.go:182] Loaded profile config "ha-523095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:33:58.375251  580677 status.go:174] checking status of ha-523095 ...
	I0127 13:33:58.375657  580677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:58.375706  580677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:58.399381  580677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35717
	I0127 13:33:58.399865  580677 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:58.400452  580677 main.go:141] libmachine: Using API Version  1
	I0127 13:33:58.400477  580677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:58.400855  580677 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:58.401053  580677 main.go:141] libmachine: (ha-523095) Calling .GetState
	I0127 13:33:58.402619  580677 status.go:371] ha-523095 host status = "Stopped" (err=<nil>)
	I0127 13:33:58.402634  580677 status.go:384] host is not running, skipping remaining checks
	I0127 13:33:58.402639  580677 status.go:176] ha-523095 status: &{Name:ha-523095 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:33:58.402653  580677 status.go:174] checking status of ha-523095-m02 ...
	I0127 13:33:58.402894  580677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:58.402925  580677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:58.416889  580677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I0127 13:33:58.417244  580677 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:58.417687  580677 main.go:141] libmachine: Using API Version  1
	I0127 13:33:58.417710  580677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:58.418006  580677 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:58.418180  580677 main.go:141] libmachine: (ha-523095-m02) Calling .GetState
	I0127 13:33:58.419595  580677 status.go:371] ha-523095-m02 host status = "Stopped" (err=<nil>)
	I0127 13:33:58.419605  580677 status.go:384] host is not running, skipping remaining checks
	I0127 13:33:58.419610  580677 status.go:176] ha-523095-m02 status: &{Name:ha-523095-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:33:58.419653  580677 status.go:174] checking status of ha-523095-m04 ...
	I0127 13:33:58.419930  580677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:58.419966  580677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:58.433555  580677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I0127 13:33:58.433927  580677 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:58.434322  580677 main.go:141] libmachine: Using API Version  1
	I0127 13:33:58.434340  580677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:58.434604  580677 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:58.434769  580677 main.go:141] libmachine: (ha-523095-m04) Calling .GetState
	I0127 13:33:58.436186  580677 status.go:371] ha-523095-m04 host status = "Stopped" (err=<nil>)
	I0127 13:33:58.436198  580677 status.go:384] host is not running, skipping remaining checks
	I0127 13:33:58.436204  580677 status.go:176] ha-523095-m04 status: &{Name:ha-523095-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.49s)

TestMultiControlPlane/serial/RestartCluster (124.15s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-523095 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 13:34:51.735338  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:35:34.434899  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-523095 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m3.436390392s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (124.15s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

TestMultiControlPlane/serial/AddSecondaryNode (75.48s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-523095 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-523095 --control-plane -v=7 --alsologtostderr: (1m14.632292371s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-523095 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (83.88s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-379556 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0127 13:38:28.676433  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-379556 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.87822795s)
--- PASS: TestJSONOutput/start/Command (83.88s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-379556 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-379556 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-379556 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-379556 --output=json --user=testUser: (7.359551653s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-477406 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-477406 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.390313ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4a39028d-2f6b-4fa2-b271-4761bfc75e48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-477406] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"733efae9-42db-4367-a6bc-0f44424a153e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20327"}}
	{"specversion":"1.0","id":"40863764-1ac3-4d67-9ecd-dfe22a6a84c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"51f2d325-5cb5-4d0e-9112-92bdf53f4674","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig"}}
	{"specversion":"1.0","id":"73dc299b-f1f1-4ad1-8577-b40d2bc3b22f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube"}}
	{"specversion":"1.0","id":"196b4e8f-4f6f-4e19-8d89-aff68fc5be05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e3b07d7c-d0e4-42c6-88e2-3803ee096e6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"894caebf-bbb1-407c-bf8f-923d6e96aee6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-477406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-477406
--- PASS: TestErrorJSONOutput (0.19s)
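The CloudEvents-style lines in the TestErrorJSONOutput stdout above can be decoded programmatically. As a rough sketch (not part of the test suite), the Go program below unmarshals each line into a struct whose fields mirror the keys visible in that output ("specversion", "type", "data", and so on) and prints any io.k8s.sigs.minikube.error event:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent mirrors the keys seen in the JSON lines above; every data
	// value in that output is a string, so map[string]string is enough here.
	type minikubeEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s: %s (exit code %s)\n",
					ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
			}
		}
	}

Fed the stdout above, this sketch would report DRV_UNSUPPORTED_OS with exit code 56, matching the exit status recorded by the test.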

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (88.61s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-457504 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-457504 --driver=kvm2  --container-runtime=crio: (41.366717358s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-470104 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-470104 --driver=kvm2  --container-runtime=crio: (44.428485968s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-457504
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-470104
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-470104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-470104
helpers_test.go:175: Cleaning up "first-457504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-457504
--- PASS: TestMinikubeProfile (88.61s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.47s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-085340 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0127 13:40:34.437791  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-085340 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.469312456s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.47s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-085340 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-085340 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-111428 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-111428 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.375034309s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.38s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111428 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111428 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.55s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-085340 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.55s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111428 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111428 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-111428
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-111428: (1.270108387s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.71s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-111428
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-111428: (20.705509832s)
--- PASS: TestMountStart/serial/RestartStopped (21.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111428 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-111428 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (119.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-268241 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 13:43:28.673074  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:43:37.505114  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-268241 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.833414611s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.22s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-268241 -- rollout status deployment/busybox: (3.11455659s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- exec busybox-58667487b6-fcgg6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- exec busybox-58667487b6-vbhnr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- exec busybox-58667487b6-fcgg6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- exec busybox-58667487b6-vbhnr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- exec busybox-58667487b6-fcgg6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- exec busybox-58667487b6-vbhnr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.56s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- exec busybox-58667487b6-fcgg6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- exec busybox-58667487b6-fcgg6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- exec busybox-58667487b6-vbhnr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-268241 -- exec busybox-58667487b6-vbhnr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
TestMultiNode/serial/AddNode (51.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-268241 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-268241 -v 3 --alsologtostderr: (51.154449547s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.73s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-268241 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp testdata/cp-test.txt multinode-268241:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp multinode-268241:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324452713/001/cp-test_multinode-268241.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp multinode-268241:/home/docker/cp-test.txt multinode-268241-m02:/home/docker/cp-test_multinode-268241_multinode-268241-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m02 "sudo cat /home/docker/cp-test_multinode-268241_multinode-268241-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp multinode-268241:/home/docker/cp-test.txt multinode-268241-m03:/home/docker/cp-test_multinode-268241_multinode-268241-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m03 "sudo cat /home/docker/cp-test_multinode-268241_multinode-268241-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp testdata/cp-test.txt multinode-268241-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp multinode-268241-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324452713/001/cp-test_multinode-268241-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp multinode-268241-m02:/home/docker/cp-test.txt multinode-268241:/home/docker/cp-test_multinode-268241-m02_multinode-268241.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241 "sudo cat /home/docker/cp-test_multinode-268241-m02_multinode-268241.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp multinode-268241-m02:/home/docker/cp-test.txt multinode-268241-m03:/home/docker/cp-test_multinode-268241-m02_multinode-268241-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m03 "sudo cat /home/docker/cp-test_multinode-268241-m02_multinode-268241-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp testdata/cp-test.txt multinode-268241-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp multinode-268241-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324452713/001/cp-test_multinode-268241-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp multinode-268241-m03:/home/docker/cp-test.txt multinode-268241:/home/docker/cp-test_multinode-268241-m03_multinode-268241.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241 "sudo cat /home/docker/cp-test_multinode-268241-m03_multinode-268241.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 cp multinode-268241-m03:/home/docker/cp-test.txt multinode-268241-m02:/home/docker/cp-test_multinode-268241-m03_multinode-268241-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 ssh -n multinode-268241-m02 "sudo cat /home/docker/cp-test_multinode-268241-m03_multinode-268241-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.03s)

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-268241 node stop m03: (1.409030081s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-268241 status: exit status 7 (400.433123ms)

                                                
                                                
-- stdout --
	multinode-268241
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-268241-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-268241-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-268241 status --alsologtostderr: exit status 7 (419.828961ms)

                                                
                                                
-- stdout --
	multinode-268241
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-268241-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-268241-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:44:51.692982  588162 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:44:51.693223  588162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:44:51.693233  588162 out.go:358] Setting ErrFile to fd 2...
	I0127 13:44:51.693240  588162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:44:51.693405  588162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 13:44:51.693608  588162 out.go:352] Setting JSON to false
	I0127 13:44:51.693648  588162 mustload.go:65] Loading cluster: multinode-268241
	I0127 13:44:51.693759  588162 notify.go:220] Checking for updates...
	I0127 13:44:51.694085  588162 config.go:182] Loaded profile config "multinode-268241": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:44:51.694110  588162 status.go:174] checking status of multinode-268241 ...
	I0127 13:44:51.694619  588162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:44:51.694672  588162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:44:51.710320  588162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0127 13:44:51.710798  588162 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:44:51.711415  588162 main.go:141] libmachine: Using API Version  1
	I0127 13:44:51.711436  588162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:44:51.711880  588162 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:44:51.712141  588162 main.go:141] libmachine: (multinode-268241) Calling .GetState
	I0127 13:44:51.713766  588162 status.go:371] multinode-268241 host status = "Running" (err=<nil>)
	I0127 13:44:51.713782  588162 host.go:66] Checking if "multinode-268241" exists ...
	I0127 13:44:51.714081  588162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:44:51.714127  588162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:44:51.728819  588162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0127 13:44:51.729200  588162 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:44:51.729679  588162 main.go:141] libmachine: Using API Version  1
	I0127 13:44:51.729701  588162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:44:51.730004  588162 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:44:51.730170  588162 main.go:141] libmachine: (multinode-268241) Calling .GetIP
	I0127 13:44:51.733694  588162 main.go:141] libmachine: (multinode-268241) DBG | domain multinode-268241 has defined MAC address 52:54:00:24:48:b1 in network mk-multinode-268241
	I0127 13:44:51.734136  588162 main.go:141] libmachine: (multinode-268241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:48:b1", ip: ""} in network mk-multinode-268241: {Iface:virbr1 ExpiryTime:2025-01-27 14:42:00 +0000 UTC Type:0 Mac:52:54:00:24:48:b1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-268241 Clientid:01:52:54:00:24:48:b1}
	I0127 13:44:51.734159  588162 main.go:141] libmachine: (multinode-268241) DBG | domain multinode-268241 has defined IP address 192.168.39.205 and MAC address 52:54:00:24:48:b1 in network mk-multinode-268241
	I0127 13:44:51.734301  588162 host.go:66] Checking if "multinode-268241" exists ...
	I0127 13:44:51.734669  588162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:44:51.734712  588162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:44:51.749358  588162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I0127 13:44:51.749659  588162 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:44:51.750079  588162 main.go:141] libmachine: Using API Version  1
	I0127 13:44:51.750101  588162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:44:51.750461  588162 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:44:51.750671  588162 main.go:141] libmachine: (multinode-268241) Calling .DriverName
	I0127 13:44:51.750830  588162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:44:51.750868  588162 main.go:141] libmachine: (multinode-268241) Calling .GetSSHHostname
	I0127 13:44:51.753254  588162 main.go:141] libmachine: (multinode-268241) DBG | domain multinode-268241 has defined MAC address 52:54:00:24:48:b1 in network mk-multinode-268241
	I0127 13:44:51.753679  588162 main.go:141] libmachine: (multinode-268241) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:48:b1", ip: ""} in network mk-multinode-268241: {Iface:virbr1 ExpiryTime:2025-01-27 14:42:00 +0000 UTC Type:0 Mac:52:54:00:24:48:b1 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-268241 Clientid:01:52:54:00:24:48:b1}
	I0127 13:44:51.753705  588162 main.go:141] libmachine: (multinode-268241) DBG | domain multinode-268241 has defined IP address 192.168.39.205 and MAC address 52:54:00:24:48:b1 in network mk-multinode-268241
	I0127 13:44:51.753790  588162 main.go:141] libmachine: (multinode-268241) Calling .GetSSHPort
	I0127 13:44:51.753990  588162 main.go:141] libmachine: (multinode-268241) Calling .GetSSHKeyPath
	I0127 13:44:51.754140  588162 main.go:141] libmachine: (multinode-268241) Calling .GetSSHUsername
	I0127 13:44:51.754255  588162 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/multinode-268241/id_rsa Username:docker}
	I0127 13:44:51.834393  588162 ssh_runner.go:195] Run: systemctl --version
	I0127 13:44:51.846194  588162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:44:51.860118  588162 kubeconfig.go:125] found "multinode-268241" server: "https://192.168.39.205:8443"
	I0127 13:44:51.860149  588162 api_server.go:166] Checking apiserver status ...
	I0127 13:44:51.860185  588162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:44:51.873705  588162 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0127 13:44:51.882895  588162 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:44:51.882926  588162 ssh_runner.go:195] Run: ls
	I0127 13:44:51.886931  588162 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0127 13:44:51.892300  588162 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I0127 13:44:51.892317  588162 status.go:463] multinode-268241 apiserver status = Running (err=<nil>)
	I0127 13:44:51.892325  588162 status.go:176] multinode-268241 status: &{Name:multinode-268241 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:44:51.892353  588162 status.go:174] checking status of multinode-268241-m02 ...
	I0127 13:44:51.892618  588162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:44:51.892650  588162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:44:51.909234  588162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33349
	I0127 13:44:51.909689  588162 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:44:51.910174  588162 main.go:141] libmachine: Using API Version  1
	I0127 13:44:51.910194  588162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:44:51.910571  588162 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:44:51.910771  588162 main.go:141] libmachine: (multinode-268241-m02) Calling .GetState
	I0127 13:44:51.912190  588162 status.go:371] multinode-268241-m02 host status = "Running" (err=<nil>)
	I0127 13:44:51.912209  588162 host.go:66] Checking if "multinode-268241-m02" exists ...
	I0127 13:44:51.912497  588162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:44:51.912538  588162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:44:51.927461  588162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44531
	I0127 13:44:51.927798  588162 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:44:51.928248  588162 main.go:141] libmachine: Using API Version  1
	I0127 13:44:51.928270  588162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:44:51.928606  588162 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:44:51.928815  588162 main.go:141] libmachine: (multinode-268241-m02) Calling .GetIP
	I0127 13:44:51.931296  588162 main.go:141] libmachine: (multinode-268241-m02) DBG | domain multinode-268241-m02 has defined MAC address 52:54:00:2b:57:a4 in network mk-multinode-268241
	I0127 13:44:51.931749  588162 main.go:141] libmachine: (multinode-268241-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:57:a4", ip: ""} in network mk-multinode-268241: {Iface:virbr1 ExpiryTime:2025-01-27 14:43:04 +0000 UTC Type:0 Mac:52:54:00:2b:57:a4 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-268241-m02 Clientid:01:52:54:00:2b:57:a4}
	I0127 13:44:51.931783  588162 main.go:141] libmachine: (multinode-268241-m02) DBG | domain multinode-268241-m02 has defined IP address 192.168.39.180 and MAC address 52:54:00:2b:57:a4 in network mk-multinode-268241
	I0127 13:44:51.931886  588162 host.go:66] Checking if "multinode-268241-m02" exists ...
	I0127 13:44:51.932202  588162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:44:51.932236  588162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:44:51.947255  588162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41073
	I0127 13:44:51.947561  588162 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:44:51.947994  588162 main.go:141] libmachine: Using API Version  1
	I0127 13:44:51.948016  588162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:44:51.948299  588162 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:44:51.948493  588162 main.go:141] libmachine: (multinode-268241-m02) Calling .DriverName
	I0127 13:44:51.948685  588162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 13:44:51.948720  588162 main.go:141] libmachine: (multinode-268241-m02) Calling .GetSSHHostname
	I0127 13:44:51.951399  588162 main.go:141] libmachine: (multinode-268241-m02) DBG | domain multinode-268241-m02 has defined MAC address 52:54:00:2b:57:a4 in network mk-multinode-268241
	I0127 13:44:51.951818  588162 main.go:141] libmachine: (multinode-268241-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:57:a4", ip: ""} in network mk-multinode-268241: {Iface:virbr1 ExpiryTime:2025-01-27 14:43:04 +0000 UTC Type:0 Mac:52:54:00:2b:57:a4 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-268241-m02 Clientid:01:52:54:00:2b:57:a4}
	I0127 13:44:51.951845  588162 main.go:141] libmachine: (multinode-268241-m02) DBG | domain multinode-268241-m02 has defined IP address 192.168.39.180 and MAC address 52:54:00:2b:57:a4 in network mk-multinode-268241
	I0127 13:44:51.951993  588162 main.go:141] libmachine: (multinode-268241-m02) Calling .GetSSHPort
	I0127 13:44:51.952188  588162 main.go:141] libmachine: (multinode-268241-m02) Calling .GetSSHKeyPath
	I0127 13:44:51.952385  588162 main.go:141] libmachine: (multinode-268241-m02) Calling .GetSSHUsername
	I0127 13:44:51.952533  588162 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20327-555419/.minikube/machines/multinode-268241-m02/id_rsa Username:docker}
	I0127 13:44:52.029108  588162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:44:52.044473  588162 status.go:176] multinode-268241-m02 status: &{Name:multinode-268241-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:44:52.044513  588162 status.go:174] checking status of multinode-268241-m03 ...
	I0127 13:44:52.044832  588162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:44:52.044869  588162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:44:52.061078  588162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40743
	I0127 13:44:52.061528  588162 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:44:52.061993  588162 main.go:141] libmachine: Using API Version  1
	I0127 13:44:52.062015  588162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:44:52.062424  588162 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:44:52.062634  588162 main.go:141] libmachine: (multinode-268241-m03) Calling .GetState
	I0127 13:44:52.064215  588162 status.go:371] multinode-268241-m03 host status = "Stopped" (err=<nil>)
	I0127 13:44:52.064233  588162 status.go:384] host is not running, skipping remaining checks
	I0127 13:44:52.064241  588162 status.go:176] multinode-268241-m03 status: &{Name:multinode-268241-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
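Note on the non-zero exits above: with node m03 stopped, "minikube status" still prints the per-node breakdown but returns exit status 7, and the test passes, treating that exit code as the expected result for a partially stopped cluster. A minimal sketch of the same pattern outside the test suite (binary path and profile name taken from this run) is:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-268241", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out)) // per-node host/kubelet/apiserver breakdown, as shown above

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// In this run the command exited 7 because one node was stopped;
			// treat that as "profile present but not fully running", not a hard failure.
			fmt.Printf("minikube status exited with code %d\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run minikube status:", err)
		}
	}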

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-268241 node start m03 -v=7 --alsologtostderr: (37.068759244s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.66s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (335.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-268241
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-268241
E0127 13:45:34.436857  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:48:28.676168  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-268241: (3m2.934796061s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-268241 --wait=true -v=8 --alsologtostderr
E0127 13:50:34.434944  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-268241 --wait=true -v=8 --alsologtostderr: (2m32.956939189s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-268241
--- PASS: TestMultiNode/serial/RestartKeepsNodes (335.99s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-268241 node delete m03: (2.039124179s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.56s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 stop
E0127 13:51:31.737731  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:53:28.676031  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-268241 stop: (3m1.205719026s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-268241 status: exit status 7 (88.848301ms)

                                                
                                                
-- stdout --
	multinode-268241
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-268241-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-268241 status --alsologtostderr: exit status 7 (84.098031ms)

                                                
                                                
-- stdout --
	multinode-268241
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-268241-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:54:09.619953  591073 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:54:09.620056  591073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:54:09.620065  591073 out.go:358] Setting ErrFile to fd 2...
	I0127 13:54:09.620069  591073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:54:09.620257  591073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 13:54:09.620431  591073 out.go:352] Setting JSON to false
	I0127 13:54:09.620467  591073 mustload.go:65] Loading cluster: multinode-268241
	I0127 13:54:09.620557  591073 notify.go:220] Checking for updates...
	I0127 13:54:09.620833  591073 config.go:182] Loaded profile config "multinode-268241": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:54:09.620855  591073 status.go:174] checking status of multinode-268241 ...
	I0127 13:54:09.621274  591073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:54:09.621313  591073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:54:09.636275  591073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0127 13:54:09.636733  591073 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:54:09.637377  591073 main.go:141] libmachine: Using API Version  1
	I0127 13:54:09.637406  591073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:54:09.637754  591073 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:54:09.637939  591073 main.go:141] libmachine: (multinode-268241) Calling .GetState
	I0127 13:54:09.639708  591073 status.go:371] multinode-268241 host status = "Stopped" (err=<nil>)
	I0127 13:54:09.639723  591073 status.go:384] host is not running, skipping remaining checks
	I0127 13:54:09.639730  591073 status.go:176] multinode-268241 status: &{Name:multinode-268241 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:54:09.639765  591073 status.go:174] checking status of multinode-268241-m02 ...
	I0127 13:54:09.640049  591073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:54:09.640099  591073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:54:09.654240  591073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
	I0127 13:54:09.654568  591073 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:54:09.654973  591073 main.go:141] libmachine: Using API Version  1
	I0127 13:54:09.654995  591073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:54:09.655259  591073 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:54:09.655449  591073 main.go:141] libmachine: (multinode-268241-m02) Calling .GetState
	I0127 13:54:09.656730  591073 status.go:371] multinode-268241-m02 host status = "Stopped" (err=<nil>)
	I0127 13:54:09.656743  591073 status.go:384] host is not running, skipping remaining checks
	I0127 13:54:09.656749  591073 status.go:176] multinode-268241-m02 status: &{Name:multinode-268241-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.38s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (111.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-268241 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 13:55:34.434853  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-268241 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.318716805s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-268241 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (111.84s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-268241
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-268241-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-268241-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.48008ms)

                                                
                                                
-- stdout --
	* [multinode-268241-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-268241-m02' is duplicated with machine name 'multinode-268241-m02' in profile 'multinode-268241'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-268241-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-268241-m03 --driver=kvm2  --container-runtime=crio: (43.740967624s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-268241
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-268241: exit status 80 (213.667369ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-268241 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-268241-m03 already exists in multinode-268241-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-268241-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.73s)

                                                
                                    
TestScheduledStopUnix (113.92s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-325126 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-325126 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.359389894s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-325126 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-325126 -n scheduled-stop-325126
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-325126 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 14:00:14.099999  562636 retry.go:31] will retry after 71.244µs: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.101176  562636 retry.go:31] will retry after 201.464µs: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.102314  562636 retry.go:31] will retry after 140.844µs: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.103450  562636 retry.go:31] will retry after 305.785µs: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.104590  562636 retry.go:31] will retry after 617.568µs: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.105715  562636 retry.go:31] will retry after 636.393µs: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.106850  562636 retry.go:31] will retry after 630.554µs: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.108000  562636 retry.go:31] will retry after 1.090155ms: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.110203  562636 retry.go:31] will retry after 1.702206ms: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.112403  562636 retry.go:31] will retry after 2.670864ms: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.115651  562636 retry.go:31] will retry after 8.011129ms: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.123888  562636 retry.go:31] will retry after 9.044148ms: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.133024  562636 retry.go:31] will retry after 17.729263ms: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.151249  562636 retry.go:31] will retry after 20.787418ms: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
I0127 14:00:14.172475  562636 retry.go:31] will retry after 17.196282ms: open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/scheduled-stop-325126/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-325126 --cancel-scheduled
E0127 14:00:17.509342  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:00:34.439066  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-325126 -n scheduled-stop-325126
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-325126
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-325126 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-325126
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-325126: exit status 7 (65.744704ms)

                                                
                                                
-- stdout --
	scheduled-stop-325126
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-325126 -n scheduled-stop-325126
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-325126 -n scheduled-stop-325126: exit status 7 (63.690622ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-325126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-325126
--- PASS: TestScheduledStopUnix (113.92s)

                                                
                                    
TestRunningBinaryUpgrade (214.45s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1905552487 start -p running-upgrade-435002 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1905552487 start -p running-upgrade-435002 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m3.090351424s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-435002 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-435002 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m29.936318213s)
helpers_test.go:175: Cleaning up "running-upgrade-435002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-435002
--- PASS: TestRunningBinaryUpgrade (214.45s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-412983 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-412983 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (84.361316ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-412983] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (89.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-412983 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-412983 --driver=kvm2  --container-runtime=crio: (1m29.628672923s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-412983 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.87s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (64.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-412983 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0127 14:03:28.673228  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-412983 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m3.675101641s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-412983 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-412983 status -o json: exit status 2 (248.125037ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-412983","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-412983
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (64.84s)

                                                
                                    
TestNoKubernetes/serial/Start (47.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-412983 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-412983 --no-kubernetes --driver=kvm2  --container-runtime=crio: (47.040940066s)
--- PASS: TestNoKubernetes/serial/Start (47.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-412983 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-412983 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.739462ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (12.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (11.280690726s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (12.08s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-412983
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-412983: (1.287037085s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (29.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-412983 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-412983 --driver=kvm2  --container-runtime=crio: (29.224955182s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (29.23s)

                                                
                                    
TestNetworkPlugins/group/false (2.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-418372 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-418372 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (97.546904ms)

                                                
                                                
-- stdout --
	* [false-418372] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20327
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:05:02.642950  598559 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:05:02.643237  598559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:02.643247  598559 out.go:358] Setting ErrFile to fd 2...
	I0127 14:05:02.643252  598559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:02.643436  598559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20327-555419/.minikube/bin
	I0127 14:05:02.644006  598559 out.go:352] Setting JSON to false
	I0127 14:05:02.644943  598559 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17248,"bootTime":1737969455,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:05:02.645025  598559 start.go:139] virtualization: kvm guest
	I0127 14:05:02.646647  598559 out.go:177] * [false-418372] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:05:02.647779  598559 out.go:177]   - MINIKUBE_LOCATION=20327
	I0127 14:05:02.647786  598559 notify.go:220] Checking for updates...
	I0127 14:05:02.650050  598559 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:05:02.651328  598559 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20327-555419/kubeconfig
	I0127 14:05:02.652426  598559 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20327-555419/.minikube
	I0127 14:05:02.653472  598559 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:05:02.654484  598559 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:05:02.655917  598559 config.go:182] Loaded profile config "NoKubernetes-412983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0127 14:05:02.656043  598559 config.go:182] Loaded profile config "cert-expiration-335486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:05:02.656134  598559 config.go:182] Loaded profile config "kubernetes-upgrade-225004": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 14:05:02.656233  598559 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:05:02.688077  598559 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:05:02.689057  598559 start.go:297] selected driver: kvm2
	I0127 14:05:02.689068  598559 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:05:02.689078  598559 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:05:02.690732  598559 out.go:201] 
	W0127 14:05:02.691795  598559 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0127 14:05:02.692763  598559 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-418372 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-418372" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-418372" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:04:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.237:8443
  name: cert-expiration-335486
contexts:
- context:
    cluster: cert-expiration-335486
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:04:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-335486
  name: cert-expiration-335486
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-335486
  user:
    client-certificate: /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/cert-expiration-335486/client.crt
    client-key: /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/cert-expiration-335486/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-418372

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418372"

                                                
                                                
----------------------- debugLogs end: false-418372 [took: 2.602747906s] --------------------------------
helpers_test.go:175: Cleaning up "false-418372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-418372
--- PASS: TestNetworkPlugins/group/false (2.84s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (116.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2676687293 start -p stopped-upgrade-736772 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2676687293 start -p stopped-upgrade-736772 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m13.183040915s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2676687293 -p stopped-upgrade-736772 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2676687293 -p stopped-upgrade-736772 stop: (2.158082682s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-736772 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-736772 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.02838641s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (116.37s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-412983 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-412983 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.052863ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestPause/serial/Start (108.46s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-966446 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0127 14:05:34.437054  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-966446 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m48.454983875s)
--- PASS: TestPause/serial/Start (108.46s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-736772
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (107.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-183205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 14:08:11.739336  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-183205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m47.246110956s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (107.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (54.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-742142 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 14:08:28.672621  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-742142 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (54.270640833s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-742142 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [76125151-29ae-4fe2-b574-16025aa0c8ab] Pending
helpers_test.go:344: "busybox" [76125151-29ae-4fe2-b574-16025aa0c8ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [76125151-29ae-4fe2-b574-16025aa0c8ab] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004390011s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-742142 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-183205 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5ad1bc57-8d82-4855-84bd-79b076e6d206] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5ad1bc57-8d82-4855-84bd-79b076e6d206] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004972488s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-183205 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-742142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-742142 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (90.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-742142 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-742142 --alsologtostderr -v=3: (1m30.888644276s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-183205 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-183205 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (90.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-183205 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-183205 --alsologtostderr -v=3: (1m30.94119046s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742142 -n embed-certs-742142
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742142 -n embed-certs-742142: exit status 7 (67.539956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-742142 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-183205 -n no-preload-183205
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-183205 -n no-preload-183205: exit status 7 (76.026898ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-183205 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (329.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-183205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-183205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m29.139652102s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-183205 -n no-preload-183205
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (329.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-456130 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-456130 --alsologtostderr -v=3: (1.337802683s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-456130 -n old-k8s-version-456130: exit status 7 (75.84419ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-456130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zghjk" [13d7b267-bc86-4d9c-85bd-8391135ff9e5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004362232s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zghjk" [13d7b267-bc86-4d9c-85bd-8391135ff9e5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00399627s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-183205 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-183205 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-183205 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-183205 -n no-preload-183205
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-183205 -n no-preload-183205: exit status 2 (239.499908ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-183205 -n no-preload-183205
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-183205 -n no-preload-183205: exit status 2 (236.225337ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-183205 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-183205 -n no-preload-183205
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-183205 -n no-preload-183205
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (54.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-379305 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 14:16:57.511608  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-379305 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (54.239823913s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-379305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-379305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.168989062s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-379305 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-379305 --alsologtostderr -v=3: (7.321392762s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-379305 -n newest-cni-379305
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-379305 -n newest-cni-379305: exit status 7 (68.183139ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-379305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (71.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-379305 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 14:18:28.673200  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/functional-104449/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-379305 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m10.816028127s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-379305 -n newest-cni-379305
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (71.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-379305 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-379305 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-379305 -n newest-cni-379305
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-379305 -n newest-cni-379305: exit status 2 (234.426434ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-379305 -n newest-cni-379305
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-379305 -n newest-cni-379305: exit status 2 (235.846874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-379305 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-379305 -n newest-cni-379305
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-379305 -n newest-cni-379305
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-178758 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 14:19:27.336406  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:27.342788  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:27.354090  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:27.375415  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:27.416781  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:27.498176  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:27.659612  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:27.981256  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:28.623278  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:29.905446  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:32.468332  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:37.590426  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:19:47.832775  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:08.314958  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:34.434788  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/addons-293977/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-178758 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m27.913899963s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-178758 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5b6a752d-9c70-45d4-a3a6-ceb80ce7d391] Pending
helpers_test.go:344: "busybox" [5b6a752d-9c70-45d4-a3a6-ceb80ce7d391] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004937254s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-178758 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-178758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-178758 describe deploy/metrics-server -n kube-system
E0127 14:20:49.276937  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/no-preload-183205/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (90.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-178758 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-178758 --alsologtostderr -v=3: (1m30.878460413s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (90.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-178758 -n default-k8s-diff-port-178758
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-178758 -n default-k8s-diff-port-178758: exit status 7 (75.985083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-178758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-178758 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-178758 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (4m59.750793388s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-178758 -n default-k8s-diff-port-178758
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.03s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (54.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (54.071769157s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.07s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-418372 "pgrep -a kubelet"
I0127 14:24:44.215804  562636 config.go:182] Loaded profile config "auto-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-418372 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jsd97" [8cb84b02-af14-41db-bd3e-dc10b730abf9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jsd97" [8cb84b02-af14-41db-bd3e-dc10b730abf9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003636841s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-418372 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (62.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m2.376492497s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7wqk6" [9268106e-7e6d-4d0d-bf15-a0b56389608d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004325555s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-418372 "pgrep -a kubelet"
I0127 14:26:19.203052  562636 config.go:182] Loaded profile config "kindnet-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-418372 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-j69js" [41641765-0e49-4889-8022-faad99482e34] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-j69js" [41641765-0e49-4889-8022-faad99482e34] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004061579s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-418372 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (76.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m16.405259483s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mndst" [82648ef9-c304-48c3-aaf9-5b81399d0d73] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004985565s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mndst" [82648ef9-c304-48c3-aaf9-5b81399d0d73] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003641682s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-178758 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-178758 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-178758 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-178758 -n default-k8s-diff-port-178758
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-178758 -n default-k8s-diff-port-178758: exit status 2 (250.119449ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-178758 -n default-k8s-diff-port-178758
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-178758 -n default-k8s-diff-port-178758: exit status 2 (257.130925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-178758 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-178758 -n default-k8s-diff-port-178758
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-178758 -n default-k8s-diff-port-178758
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.77s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (66.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m6.787317434s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.79s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hgvsq" [6ba0bade-2934-4f17-a3eb-346f137e1d0d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005754848s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-418372 "pgrep -a kubelet"
I0127 14:28:07.190585  562636 config.go:182] Loaded profile config "calico-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-418372 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bdg87" [482ec823-4d5c-4fd7-a6e5-d8c58762a5ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bdg87" [482ec823-4d5c-4fd7-a6e5-d8c58762a5ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006903576s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-418372 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (58.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (58.309354606s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (58.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-418372 "pgrep -a kubelet"
I0127 14:28:43.051855  562636 config.go:182] Loaded profile config "custom-flannel-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-418372 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-s9rlg" [225c4282-5481-418b-ac06-4671812631c5] Pending
helpers_test.go:344: "netcat-5d86dc444-s9rlg" [225c4282-5481-418b-ac06-4671812631c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-s9rlg" [225c4282-5481-418b-ac06-4671812631c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.002707568s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-418372 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (84.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m24.050131536s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.05s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-418372 "pgrep -a kubelet"
I0127 14:29:32.464129  562636 config.go:182] Loaded profile config "enable-default-cni-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-418372 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xndd4" [4850b4ee-c6b9-400a-8c7b-7138f62f1b18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-xndd4" [4850b4ee-c6b9-400a-8c7b-7138f62f1b18] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004282921s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-418372 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (56.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-418372 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (56.529534181s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.53s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6sccm" [ff9736c3-ee40-44bc-b74a-c8c0a50aaf63] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00351032s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-418372 "pgrep -a kubelet"
I0127 14:30:43.671372  562636 config.go:182] Loaded profile config "flannel-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-418372 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-q7vwv" [ccaf10b8-4224-4ab7-8c05-67f14f4bed9d] Pending
helpers_test.go:344: "netcat-5d86dc444-q7vwv" [ccaf10b8-4224-4ab7-8c05-67f14f4bed9d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-q7vwv" [ccaf10b8-4224-4ab7-8c05-67f14f4bed9d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00511118s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-418372 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-418372 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
I0127 14:30:55.132288  562636 config.go:182] Loaded profile config "bridge-418372": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-418372 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-r59xq" [d8b86f1b-583d-414a-b735-1c1270c16507] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-r59xq" [d8b86f1b-583d-414a-b735-1c1270c16507] Running
E0127 14:30:59.684428  562636 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/default-k8s-diff-port-178758/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003927561s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-418372 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-418372 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (39/316)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
265 TestStartStop/group/disable-driver-mounts 0.14
269 TestNetworkPlugins/group/kubenet 2.82
278 TestNetworkPlugins/group/cilium 3.05
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
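The DownloadOnly skips in this group all hinge on a preload tarball already being present in the job's .minikube cache. A quick way to confirm that on this host is sketched below; the base path is taken from the logs above, but the cache subdirectory name is an assumption:

    ls /home/jenkins/minikube-integration/20327-555419/.minikube/cache/preloaded-tarball/   # subdirectory name assumed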

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-293977 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
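All eight TunnelCmd skips share one cause: the CI user cannot run route without a password prompt, so functional_test_tunnel_test.go:90 bails out. A hypothetical sudoers fragment that would avoid the prompt is sketched below; the user name is inferred from the workspace path in the logs and the route binary location varies by distribution, so treat both as assumptions:

    # /etc/sudoers.d/minikube-tunnel -- hypothetical sketch, validate with visudo before installing
    jenkins ALL=(ALL) NOPASSWD: /sbin/route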

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-650791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-650791
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-418372 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-418372" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-418372" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:04:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.237:8443
  name: cert-expiration-335486
contexts:
- context:
    cluster: cert-expiration-335486
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:04:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-335486
  name: cert-expiration-335486
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-335486
  user:
    client-certificate: /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/cert-expiration-335486/client.crt
    client-key: /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/cert-expiration-335486/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-418372

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418372"

                                                
                                                
----------------------- debugLogs end: kubenet-418372 [took: 2.679715501s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-418372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-418372
--- SKIP: TestNetworkPlugins/group/kubenet (2.82s)
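The kubenet skip (net_test.go:93) is expected on this job: crio requires a CNI plugin, so the kubenet variant cannot run here, and the debug logs above fail only because the profile was never started. For comparison, starting the profile named in those logs with a CNI that crio accepts would look roughly like this (the flags are standard minikube options; the exact invocation used by the test suite is not shown in this report):

    minikube start -p kubenet-418372 --driver=kvm2 --container-runtime=crio --cni=bridge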

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-418372 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-418372" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20327-555419/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:04:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.237:8443
  name: cert-expiration-335486
contexts:
- context:
    cluster: cert-expiration-335486
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 14:04:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-335486
  name: cert-expiration-335486
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-335486
  user:
    client-certificate: /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/cert-expiration-335486/client.crt
    client-key: /home/jenkins/minikube-integration/20327-555419/.minikube/profiles/cert-expiration-335486/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-418372

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-418372" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418372"

                                                
                                                
----------------------- debugLogs end: cilium-418372 [took: 2.915748942s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-418372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-418372
--- SKIP: TestNetworkPlugins/group/cilium (3.05s)

                                                
                                    