Test Report: KVM_Linux_crio 20539

404431ee24582bacb75d7cfbedbe3aa3f9ffc1a2:2025-03-17:38754

Test fail (10/322)

TestAddons/parallel/Ingress (151.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-012915 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-012915 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-012915 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8de7ed01-2923-4e6d-8d79-73b590e77823] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8de7ed01-2923-4e6d-8d79-73b590e77823] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003753409s
I0317 12:44:47.246814  629188 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-012915 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.743909845s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-012915 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.84
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-012915 -n addons-012915
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 logs -n 25: (1.129064349s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-534794                                                                     | download-only-534794 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:41 UTC |
	| delete  | -p download-only-793997                                                                     | download-only-793997 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:41 UTC |
	| delete  | -p download-only-534794                                                                     | download-only-534794 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:41 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-652834 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC |                     |
	|         | binary-mirror-652834                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41719                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-652834                                                                     | binary-mirror-652834 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:41 UTC |
	| addons  | disable dashboard -p                                                                        | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC |                     |
	|         | addons-012915                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC |                     |
	|         | addons-012915                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-012915 --wait=true                                                                | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:43 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-012915 addons disable                                                                | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-012915 addons disable                                                                | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:44 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-012915 addons                                                                        | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-012915 addons disable                                                                | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-012915 ssh cat                                                                       | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | /opt/local-path-provisioner/pvc-dfd0802a-c635-46e0-a42e-5cc628c5aa4b_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-012915 addons                                                                        | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-012915 addons disable                                                                | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:45 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | -p addons-012915                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-012915 ip                                                                            | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	| addons  | addons-012915 addons disable                                                                | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-012915 addons                                                                        | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-012915 addons disable                                                                | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-012915 addons                                                                        | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-012915 ssh curl -s                                                                   | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-012915 addons                                                                        | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:45 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-012915 addons                                                                        | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:45 UTC | 17 Mar 25 12:45 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-012915 ip                                                                            | addons-012915        | jenkins | v1.35.0 | 17 Mar 25 12:46 UTC | 17 Mar 25 12:46 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 12:41:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 12:41:35.813646  629808 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:41:35.813855  629808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:41:35.813863  629808 out.go:358] Setting ErrFile to fd 2...
	I0317 12:41:35.813867  629808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:41:35.814035  629808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 12:41:35.815145  629808 out.go:352] Setting JSON to false
	I0317 12:41:35.816455  629808 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8640,"bootTime":1742206656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 12:41:35.816546  629808 start.go:139] virtualization: kvm guest
	I0317 12:41:35.818208  629808 out.go:177] * [addons-012915] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 12:41:35.819935  629808 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 12:41:35.819956  629808 notify.go:220] Checking for updates...
	I0317 12:41:35.822132  629808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:41:35.823194  629808 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 12:41:35.824478  629808 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 12:41:35.825562  629808 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 12:41:35.826614  629808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 12:41:35.827758  629808 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:41:35.858658  629808 out.go:177] * Using the kvm2 driver based on user configuration
	I0317 12:41:35.859786  629808 start.go:297] selected driver: kvm2
	I0317 12:41:35.859802  629808 start.go:901] validating driver "kvm2" against <nil>
	I0317 12:41:35.859822  629808 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 12:41:35.860719  629808 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 12:41:35.860816  629808 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 12:41:35.876109  629808 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 12:41:35.876173  629808 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 12:41:35.876434  629808 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 12:41:35.876472  629808 cni.go:84] Creating CNI manager for ""
	I0317 12:41:35.876526  629808 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 12:41:35.876536  629808 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 12:41:35.876587  629808 start.go:340] cluster config:
	{Name:addons-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:41:35.876691  629808 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 12:41:35.878380  629808 out.go:177] * Starting "addons-012915" primary control-plane node in "addons-012915" cluster
	I0317 12:41:35.879518  629808 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 12:41:35.879588  629808 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0317 12:41:35.879602  629808 cache.go:56] Caching tarball of preloaded images
	I0317 12:41:35.880150  629808 preload.go:172] Found /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0317 12:41:35.880183  629808 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0317 12:41:35.880991  629808 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/config.json ...
	I0317 12:41:35.881026  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/config.json: {Name:mk1005f934882c41acab1ea5c234ee630faed466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:41:35.881331  629808 start.go:360] acquireMachinesLock for addons-012915: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 12:41:35.881758  629808 start.go:364] duration metric: took 387.934µs to acquireMachinesLock for "addons-012915"
	I0317 12:41:35.881790  629808 start.go:93] Provisioning new machine with config: &{Name:addons-012915 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:addons-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 12:41:35.881853  629808 start.go:125] createHost starting for "" (driver="kvm2")
	I0317 12:41:35.883346  629808 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0317 12:41:35.883520  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:41:35.883658  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:41:35.897979  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
	I0317 12:41:35.898441  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:41:35.898962  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:41:35.898990  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:41:35.899318  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:41:35.899513  629808 main.go:141] libmachine: (addons-012915) Calling .GetMachineName
	I0317 12:41:35.899678  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:41:35.899823  629808 start.go:159] libmachine.API.Create for "addons-012915" (driver="kvm2")
	I0317 12:41:35.899860  629808 client.go:168] LocalClient.Create starting
	I0317 12:41:35.899902  629808 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem
	I0317 12:41:36.032339  629808 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem
	I0317 12:41:36.670062  629808 main.go:141] libmachine: Running pre-create checks...
	I0317 12:41:36.670095  629808 main.go:141] libmachine: (addons-012915) Calling .PreCreateCheck
	I0317 12:41:36.670614  629808 main.go:141] libmachine: (addons-012915) Calling .GetConfigRaw
	I0317 12:41:36.671069  629808 main.go:141] libmachine: Creating machine...
	I0317 12:41:36.671086  629808 main.go:141] libmachine: (addons-012915) Calling .Create
	I0317 12:41:36.671309  629808 main.go:141] libmachine: (addons-012915) creating KVM machine...
	I0317 12:41:36.671332  629808 main.go:141] libmachine: (addons-012915) creating network...
	I0317 12:41:36.672732  629808 main.go:141] libmachine: (addons-012915) DBG | found existing default KVM network
	I0317 12:41:36.673529  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:36.673309  629830 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011ef20}
	I0317 12:41:36.673562  629808 main.go:141] libmachine: (addons-012915) DBG | created network xml: 
	I0317 12:41:36.673597  629808 main.go:141] libmachine: (addons-012915) DBG | <network>
	I0317 12:41:36.673613  629808 main.go:141] libmachine: (addons-012915) DBG |   <name>mk-addons-012915</name>
	I0317 12:41:36.673620  629808 main.go:141] libmachine: (addons-012915) DBG |   <dns enable='no'/>
	I0317 12:41:36.673627  629808 main.go:141] libmachine: (addons-012915) DBG |   
	I0317 12:41:36.673634  629808 main.go:141] libmachine: (addons-012915) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0317 12:41:36.673640  629808 main.go:141] libmachine: (addons-012915) DBG |     <dhcp>
	I0317 12:41:36.673646  629808 main.go:141] libmachine: (addons-012915) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0317 12:41:36.673653  629808 main.go:141] libmachine: (addons-012915) DBG |     </dhcp>
	I0317 12:41:36.673657  629808 main.go:141] libmachine: (addons-012915) DBG |   </ip>
	I0317 12:41:36.673663  629808 main.go:141] libmachine: (addons-012915) DBG |   
	I0317 12:41:36.673668  629808 main.go:141] libmachine: (addons-012915) DBG | </network>
	I0317 12:41:36.673678  629808 main.go:141] libmachine: (addons-012915) DBG | 
	I0317 12:41:36.678627  629808 main.go:141] libmachine: (addons-012915) DBG | trying to create private KVM network mk-addons-012915 192.168.39.0/24...
	I0317 12:41:36.743769  629808 main.go:141] libmachine: (addons-012915) DBG | private KVM network mk-addons-012915 192.168.39.0/24 created
	I0317 12:41:36.743808  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:36.743736  629830 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 12:41:36.743839  629808 main.go:141] libmachine: (addons-012915) setting up store path in /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915 ...
	I0317 12:41:36.743862  629808 main.go:141] libmachine: (addons-012915) building disk image from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0317 12:41:36.743883  629808 main.go:141] libmachine: (addons-012915) Downloading /home/jenkins/minikube-integration/20539-621978/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 12:41:37.007383  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:37.007238  629830 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa...
	I0317 12:41:37.122966  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:37.122814  629830 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/addons-012915.rawdisk...
	I0317 12:41:37.122998  629808 main.go:141] libmachine: (addons-012915) DBG | Writing magic tar header
	I0317 12:41:37.123011  629808 main.go:141] libmachine: (addons-012915) DBG | Writing SSH key tar header
	I0317 12:41:37.123022  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:37.122940  629830 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915 ...
	I0317 12:41:37.123037  629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915
	I0317 12:41:37.123059  629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915 (perms=drwx------)
	I0317 12:41:37.123075  629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines (perms=drwxr-xr-x)
	I0317 12:41:37.123088  629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines
	I0317 12:41:37.123099  629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 12:41:37.123104  629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978
	I0317 12:41:37.123111  629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0317 12:41:37.123115  629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins
	I0317 12:41:37.123122  629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube (perms=drwxr-xr-x)
	I0317 12:41:37.123135  629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins/minikube-integration/20539-621978 (perms=drwxrwxr-x)
	I0317 12:41:37.123144  629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0317 12:41:37.123157  629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0317 12:41:37.123166  629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home
	I0317 12:41:37.123179  629808 main.go:141] libmachine: (addons-012915) DBG | skipping /home - not owner
	I0317 12:41:37.123189  629808 main.go:141] libmachine: (addons-012915) creating domain...
	I0317 12:41:37.124332  629808 main.go:141] libmachine: (addons-012915) define libvirt domain using xml: 
	I0317 12:41:37.124357  629808 main.go:141] libmachine: (addons-012915) <domain type='kvm'>
	I0317 12:41:37.124368  629808 main.go:141] libmachine: (addons-012915)   <name>addons-012915</name>
	I0317 12:41:37.124380  629808 main.go:141] libmachine: (addons-012915)   <memory unit='MiB'>4000</memory>
	I0317 12:41:37.124393  629808 main.go:141] libmachine: (addons-012915)   <vcpu>2</vcpu>
	I0317 12:41:37.124399  629808 main.go:141] libmachine: (addons-012915)   <features>
	I0317 12:41:37.124408  629808 main.go:141] libmachine: (addons-012915)     <acpi/>
	I0317 12:41:37.124417  629808 main.go:141] libmachine: (addons-012915)     <apic/>
	I0317 12:41:37.124425  629808 main.go:141] libmachine: (addons-012915)     <pae/>
	I0317 12:41:37.124432  629808 main.go:141] libmachine: (addons-012915)     
	I0317 12:41:37.124437  629808 main.go:141] libmachine: (addons-012915)   </features>
	I0317 12:41:37.124444  629808 main.go:141] libmachine: (addons-012915)   <cpu mode='host-passthrough'>
	I0317 12:41:37.124452  629808 main.go:141] libmachine: (addons-012915)   
	I0317 12:41:37.124457  629808 main.go:141] libmachine: (addons-012915)   </cpu>
	I0317 12:41:37.124512  629808 main.go:141] libmachine: (addons-012915)   <os>
	I0317 12:41:37.124545  629808 main.go:141] libmachine: (addons-012915)     <type>hvm</type>
	I0317 12:41:37.124560  629808 main.go:141] libmachine: (addons-012915)     <boot dev='cdrom'/>
	I0317 12:41:37.124570  629808 main.go:141] libmachine: (addons-012915)     <boot dev='hd'/>
	I0317 12:41:37.124581  629808 main.go:141] libmachine: (addons-012915)     <bootmenu enable='no'/>
	I0317 12:41:37.124590  629808 main.go:141] libmachine: (addons-012915)   </os>
	I0317 12:41:37.124599  629808 main.go:141] libmachine: (addons-012915)   <devices>
	I0317 12:41:37.124614  629808 main.go:141] libmachine: (addons-012915)     <disk type='file' device='cdrom'>
	I0317 12:41:37.124668  629808 main.go:141] libmachine: (addons-012915)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/boot2docker.iso'/>
	I0317 12:41:37.124692  629808 main.go:141] libmachine: (addons-012915)       <target dev='hdc' bus='scsi'/>
	I0317 12:41:37.124703  629808 main.go:141] libmachine: (addons-012915)       <readonly/>
	I0317 12:41:37.124714  629808 main.go:141] libmachine: (addons-012915)     </disk>
	I0317 12:41:37.124729  629808 main.go:141] libmachine: (addons-012915)     <disk type='file' device='disk'>
	I0317 12:41:37.124743  629808 main.go:141] libmachine: (addons-012915)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0317 12:41:37.124769  629808 main.go:141] libmachine: (addons-012915)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/addons-012915.rawdisk'/>
	I0317 12:41:37.124790  629808 main.go:141] libmachine: (addons-012915)       <target dev='hda' bus='virtio'/>
	I0317 12:41:37.124800  629808 main.go:141] libmachine: (addons-012915)     </disk>
	I0317 12:41:37.124814  629808 main.go:141] libmachine: (addons-012915)     <interface type='network'>
	I0317 12:41:37.124827  629808 main.go:141] libmachine: (addons-012915)       <source network='mk-addons-012915'/>
	I0317 12:41:37.124837  629808 main.go:141] libmachine: (addons-012915)       <model type='virtio'/>
	I0317 12:41:37.124845  629808 main.go:141] libmachine: (addons-012915)     </interface>
	I0317 12:41:37.124852  629808 main.go:141] libmachine: (addons-012915)     <interface type='network'>
	I0317 12:41:37.124857  629808 main.go:141] libmachine: (addons-012915)       <source network='default'/>
	I0317 12:41:37.124863  629808 main.go:141] libmachine: (addons-012915)       <model type='virtio'/>
	I0317 12:41:37.124869  629808 main.go:141] libmachine: (addons-012915)     </interface>
	I0317 12:41:37.124878  629808 main.go:141] libmachine: (addons-012915)     <serial type='pty'>
	I0317 12:41:37.124891  629808 main.go:141] libmachine: (addons-012915)       <target port='0'/>
	I0317 12:41:37.124904  629808 main.go:141] libmachine: (addons-012915)     </serial>
	I0317 12:41:37.124921  629808 main.go:141] libmachine: (addons-012915)     <console type='pty'>
	I0317 12:41:37.124939  629808 main.go:141] libmachine: (addons-012915)       <target type='serial' port='0'/>
	I0317 12:41:37.124949  629808 main.go:141] libmachine: (addons-012915)     </console>
	I0317 12:41:37.124956  629808 main.go:141] libmachine: (addons-012915)     <rng model='virtio'>
	I0317 12:41:37.124969  629808 main.go:141] libmachine: (addons-012915)       <backend model='random'>/dev/random</backend>
	I0317 12:41:37.124974  629808 main.go:141] libmachine: (addons-012915)     </rng>
	I0317 12:41:37.124981  629808 main.go:141] libmachine: (addons-012915)     
	I0317 12:41:37.124985  629808 main.go:141] libmachine: (addons-012915)     
	I0317 12:41:37.124990  629808 main.go:141] libmachine: (addons-012915)   </devices>
	I0317 12:41:37.124996  629808 main.go:141] libmachine: (addons-012915) </domain>
	I0317 12:41:37.125003  629808 main.go:141] libmachine: (addons-012915) 
	I0317 12:41:37.128694  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:88:e3:dc in network default
	I0317 12:41:37.129250  629808 main.go:141] libmachine: (addons-012915) starting domain...
	I0317 12:41:37.129270  629808 main.go:141] libmachine: (addons-012915) ensuring networks are active...
	I0317 12:41:37.129282  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:37.129925  629808 main.go:141] libmachine: (addons-012915) Ensuring network default is active
	I0317 12:41:37.130245  629808 main.go:141] libmachine: (addons-012915) Ensuring network mk-addons-012915 is active
	I0317 12:41:37.130755  629808 main.go:141] libmachine: (addons-012915) getting domain XML...
	I0317 12:41:37.131556  629808 main.go:141] libmachine: (addons-012915) creating domain...
	I0317 12:41:38.311165  629808 main.go:141] libmachine: (addons-012915) waiting for IP...
	I0317 12:41:38.311986  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:38.312393  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:38.312465  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:38.312395  629830 retry.go:31] will retry after 253.029131ms: waiting for domain to come up
	I0317 12:41:38.566539  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:38.566908  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:38.566934  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:38.566865  629830 retry.go:31] will retry after 239.315749ms: waiting for domain to come up
	I0317 12:41:38.808393  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:38.808821  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:38.808865  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:38.808771  629830 retry.go:31] will retry after 361.01477ms: waiting for domain to come up
	I0317 12:41:39.171325  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:39.171724  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:39.171793  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:39.171731  629830 retry.go:31] will retry after 460.672416ms: waiting for domain to come up
	I0317 12:41:39.634438  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:39.634848  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:39.634883  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:39.634808  629830 retry.go:31] will retry after 481.725022ms: waiting for domain to come up
	I0317 12:41:40.118658  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:40.119109  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:40.119136  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:40.119080  629830 retry.go:31] will retry after 928.899682ms: waiting for domain to come up
	I0317 12:41:41.049707  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:41.050130  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:41.050154  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:41.050110  629830 retry.go:31] will retry after 1.035009529s: waiting for domain to come up
	I0317 12:41:42.086478  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:42.086846  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:42.086878  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:42.086812  629830 retry.go:31] will retry after 1.159049516s: waiting for domain to come up
	I0317 12:41:43.248106  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:43.248441  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:43.248467  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:43.248408  629830 retry.go:31] will retry after 1.261706174s: waiting for domain to come up
	I0317 12:41:44.511845  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:44.512402  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:44.512431  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:44.512320  629830 retry.go:31] will retry after 1.687461831s: waiting for domain to come up
	I0317 12:41:46.201918  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:46.202361  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:46.202394  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:46.202317  629830 retry.go:31] will retry after 1.948915961s: waiting for domain to come up
	I0317 12:41:48.153380  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:48.153764  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:48.153791  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:48.153705  629830 retry.go:31] will retry after 2.589327367s: waiting for domain to come up
	I0317 12:41:50.746364  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:50.746758  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:50.746777  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:50.746730  629830 retry.go:31] will retry after 3.250724894s: waiting for domain to come up
	I0317 12:41:53.998634  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:53.999024  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
	I0317 12:41:53.999079  629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:53.999030  629830 retry.go:31] will retry after 4.576109972s: waiting for domain to come up
	I0317 12:41:58.576359  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:58.576753  629808 main.go:141] libmachine: (addons-012915) found domain IP: 192.168.39.84
	I0317 12:41:58.576782  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has current primary IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:58.576790  629808 main.go:141] libmachine: (addons-012915) reserving static IP address...
	I0317 12:41:58.577148  629808 main.go:141] libmachine: (addons-012915) DBG | unable to find host DHCP lease matching {name: "addons-012915", mac: "52:54:00:2b:05:f6", ip: "192.168.39.84"} in network mk-addons-012915
	I0317 12:41:58.651196  629808 main.go:141] libmachine: (addons-012915) reserved static IP address 192.168.39.84 for domain addons-012915
	I0317 12:41:58.651245  629808 main.go:141] libmachine: (addons-012915) DBG | Getting to WaitForSSH function...
	I0317 12:41:58.651255  629808 main.go:141] libmachine: (addons-012915) waiting for SSH...
	I0317 12:41:58.653916  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:58.654263  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:05:f6}
	I0317 12:41:58.654304  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:58.654411  629808 main.go:141] libmachine: (addons-012915) DBG | Using SSH client type: external
	I0317 12:41:58.654449  629808 main.go:141] libmachine: (addons-012915) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa (-rw-------)
	I0317 12:41:58.654491  629808 main.go:141] libmachine: (addons-012915) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 12:41:58.654508  629808 main.go:141] libmachine: (addons-012915) DBG | About to run SSH command:
	I0317 12:41:58.654524  629808 main.go:141] libmachine: (addons-012915) DBG | exit 0
	I0317 12:41:58.779465  629808 main.go:141] libmachine: (addons-012915) DBG | SSH cmd err, output: <nil>: 
	I0317 12:41:58.779759  629808 main.go:141] libmachine: (addons-012915) KVM machine creation complete
	I0317 12:41:58.780047  629808 main.go:141] libmachine: (addons-012915) Calling .GetConfigRaw
	I0317 12:41:58.780641  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:41:58.780854  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:41:58.781074  629808 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 12:41:58.781090  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:41:58.782308  629808 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 12:41:58.782327  629808 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 12:41:58.782334  629808 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 12:41:58.782343  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:41:58.784660  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:58.784988  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:41:58.785019  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:58.785151  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:41:58.785327  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:41:58.785512  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:41:58.785630  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:41:58.785798  629808 main.go:141] libmachine: Using SSH client type: native
	I0317 12:41:58.786018  629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0317 12:41:58.786030  629808 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 12:41:58.894557  629808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 12:41:58.894581  629808 main.go:141] libmachine: Detecting the provisioner...
	I0317 12:41:58.894590  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:41:58.897223  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:58.897595  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:41:58.897624  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:58.897751  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:41:58.897953  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:41:58.898118  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:41:58.898279  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:41:58.898474  629808 main.go:141] libmachine: Using SSH client type: native
	I0317 12:41:58.898670  629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0317 12:41:58.898680  629808 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 12:41:59.008106  629808 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 12:41:59.008176  629808 main.go:141] libmachine: found compatible host: buildroot
	I0317 12:41:59.008189  629808 main.go:141] libmachine: Provisioning with buildroot...
	I0317 12:41:59.008200  629808 main.go:141] libmachine: (addons-012915) Calling .GetMachineName
	I0317 12:41:59.008463  629808 buildroot.go:166] provisioning hostname "addons-012915"
	I0317 12:41:59.008492  629808 main.go:141] libmachine: (addons-012915) Calling .GetMachineName
	I0317 12:41:59.008706  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:41:59.011455  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.011845  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:41:59.011879  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.012026  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:41:59.012226  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:41:59.012395  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:41:59.012542  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:41:59.012710  629808 main.go:141] libmachine: Using SSH client type: native
	I0317 12:41:59.012964  629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0317 12:41:59.012979  629808 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-012915 && echo "addons-012915" | sudo tee /etc/hostname
	I0317 12:41:59.132159  629808 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-012915
	
	I0317 12:41:59.132201  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:41:59.135100  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.135522  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:41:59.135559  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.135747  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:41:59.135948  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:41:59.136132  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:41:59.136247  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:41:59.136410  629808 main.go:141] libmachine: Using SSH client type: native
	I0317 12:41:59.136634  629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0317 12:41:59.136651  629808 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-012915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-012915/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-012915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 12:41:59.251490  629808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 12:41:59.251521  629808 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 12:41:59.251567  629808 buildroot.go:174] setting up certificates
	I0317 12:41:59.251584  629808 provision.go:84] configureAuth start
	I0317 12:41:59.251598  629808 main.go:141] libmachine: (addons-012915) Calling .GetMachineName
	I0317 12:41:59.251863  629808 main.go:141] libmachine: (addons-012915) Calling .GetIP
	I0317 12:41:59.254773  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.255076  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:41:59.255093  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.255247  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:41:59.257261  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.257560  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:41:59.257588  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.257722  629808 provision.go:143] copyHostCerts
	I0317 12:41:59.257790  629808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 12:41:59.257902  629808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 12:41:59.257959  629808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 12:41:59.258007  629808 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.addons-012915 san=[127.0.0.1 192.168.39.84 addons-012915 localhost minikube]
	I0317 12:41:59.717849  629808 provision.go:177] copyRemoteCerts
	I0317 12:41:59.717915  629808 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 12:41:59.717945  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:41:59.720628  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.720963  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:41:59.720991  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.721175  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:41:59.721379  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:41:59.721505  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:41:59.722126  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:41:59.804879  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 12:41:59.826876  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 12:41:59.848221  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 12:41:59.869105  629808 provision.go:87] duration metric: took 617.505097ms to configureAuth
	I0317 12:41:59.869139  629808 buildroot.go:189] setting minikube options for container-runtime
	I0317 12:41:59.869333  629808 config.go:182] Loaded profile config "addons-012915": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 12:41:59.869412  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:41:59.871922  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.872303  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:41:59.872333  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:41:59.872471  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:41:59.872666  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:41:59.872851  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:41:59.872954  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:41:59.873116  629808 main.go:141] libmachine: Using SSH client type: native
	I0317 12:41:59.873310  629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0317 12:41:59.873328  629808 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 12:42:00.092202  629808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 12:42:00.092245  629808 main.go:141] libmachine: Checking connection to Docker...
	I0317 12:42:00.092258  629808 main.go:141] libmachine: (addons-012915) Calling .GetURL
	I0317 12:42:00.093695  629808 main.go:141] libmachine: (addons-012915) DBG | using libvirt version 6000000
	I0317 12:42:00.095810  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.096173  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:00.096200  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.096400  629808 main.go:141] libmachine: Docker is up and running!
	I0317 12:42:00.096414  629808 main.go:141] libmachine: Reticulating splines...
	I0317 12:42:00.096423  629808 client.go:171] duration metric: took 24.196550306s to LocalClient.Create
	I0317 12:42:00.096450  629808 start.go:167] duration metric: took 24.196627505s to libmachine.API.Create "addons-012915"
	I0317 12:42:00.096465  629808 start.go:293] postStartSetup for "addons-012915" (driver="kvm2")
	I0317 12:42:00.096479  629808 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 12:42:00.096502  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:00.096741  629808 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 12:42:00.096766  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:00.098721  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.099034  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:00.099058  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.099255  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:00.099455  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:00.099615  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:00.099777  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:00.185091  629808 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 12:42:00.189105  629808 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 12:42:00.189137  629808 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 12:42:00.189228  629808 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 12:42:00.189256  629808 start.go:296] duration metric: took 92.784132ms for postStartSetup
	I0317 12:42:00.189291  629808 main.go:141] libmachine: (addons-012915) Calling .GetConfigRaw
	I0317 12:42:00.189900  629808 main.go:141] libmachine: (addons-012915) Calling .GetIP
	I0317 12:42:00.192549  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.192922  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:00.192951  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.193206  629808 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/config.json ...
	I0317 12:42:00.193390  629808 start.go:128] duration metric: took 24.311527321s to createHost
	I0317 12:42:00.193418  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:00.195761  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.196087  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:00.196111  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.196236  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:00.196395  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:00.196515  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:00.196611  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:00.196714  629808 main.go:141] libmachine: Using SSH client type: native
	I0317 12:42:00.196892  629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0317 12:42:00.196900  629808 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 12:42:00.303756  629808 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742215320.281896410
	
	I0317 12:42:00.303779  629808 fix.go:216] guest clock: 1742215320.281896410
	I0317 12:42:00.303789  629808 fix.go:229] Guest: 2025-03-17 12:42:00.28189641 +0000 UTC Remote: 2025-03-17 12:42:00.193403202 +0000 UTC m=+24.415050746 (delta=88.493208ms)
	I0317 12:42:00.303809  629808 fix.go:200] guest clock delta is within tolerance: 88.493208ms
	I0317 12:42:00.303821  629808 start.go:83] releasing machines lock for "addons-012915", held for 24.42203961s
	I0317 12:42:00.303846  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:00.304105  629808 main.go:141] libmachine: (addons-012915) Calling .GetIP
	I0317 12:42:00.308079  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.308414  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:00.308434  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.308552  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:00.309037  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:00.309200  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:00.309301  629808 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 12:42:00.309349  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:00.309356  629808 ssh_runner.go:195] Run: cat /version.json
	I0317 12:42:00.309371  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:00.311994  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.312177  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.312318  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:00.312348  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.312514  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:00.312579  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:00.312604  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:00.312713  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:00.312793  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:00.312874  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:00.313002  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:00.313006  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:00.313125  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:00.313232  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:00.411767  629808 ssh_runner.go:195] Run: systemctl --version
	I0317 12:42:00.417321  629808 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 12:42:00.569133  629808 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 12:42:00.574456  629808 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 12:42:00.574543  629808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 12:42:00.589077  629808 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 12:42:00.589107  629808 start.go:495] detecting cgroup driver to use...
	I0317 12:42:00.589187  629808 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 12:42:00.604322  629808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 12:42:00.616427  629808 docker.go:217] disabling cri-docker service (if available) ...
	I0317 12:42:00.616485  629808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 12:42:00.628419  629808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 12:42:00.640317  629808 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 12:42:00.750344  629808 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 12:42:00.870843  629808 docker.go:233] disabling docker service ...
	I0317 12:42:00.870929  629808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 12:42:00.884767  629808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 12:42:00.897097  629808 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 12:42:01.031545  629808 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 12:42:01.136508  629808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 12:42:01.149270  629808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 12:42:01.165797  629808 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0317 12:42:01.165881  629808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 12:42:01.175015  629808 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 12:42:01.175091  629808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 12:42:01.184401  629808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 12:42:01.193730  629808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 12:42:01.202925  629808 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 12:42:01.212622  629808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 12:42:01.222046  629808 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 12:42:01.238009  629808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 12:42:01.247198  629808 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 12:42:01.255480  629808 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 12:42:01.255576  629808 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 12:42:01.268145  629808 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 12:42:01.284086  629808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:42:01.383226  629808 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 12:42:01.470381  629808 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 12:42:01.470484  629808 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 12:42:01.474769  629808 start.go:563] Will wait 60s for crictl version
	I0317 12:42:01.474845  629808 ssh_runner.go:195] Run: which crictl
	I0317 12:42:01.478396  629808 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 12:42:01.510326  629808 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 12:42:01.510439  629808 ssh_runner.go:195] Run: crio --version
	I0317 12:42:01.535733  629808 ssh_runner.go:195] Run: crio --version
	I0317 12:42:01.562912  629808 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0317 12:42:01.564090  629808 main.go:141] libmachine: (addons-012915) Calling .GetIP
	I0317 12:42:01.566854  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:01.567314  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:01.567346  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:01.567556  629808 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0317 12:42:01.571386  629808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 12:42:01.583014  629808 kubeadm.go:883] updating cluster {Name:addons-012915 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012915 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 12:42:01.583132  629808 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 12:42:01.583176  629808 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 12:42:01.612830  629808 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0317 12:42:01.612901  629808 ssh_runner.go:195] Run: which lz4
	I0317 12:42:01.616545  629808 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 12:42:01.620275  629808 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 12:42:01.620307  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0317 12:42:02.725347  629808 crio.go:462] duration metric: took 1.108859948s to copy over tarball
	I0317 12:42:02.725449  629808 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 12:42:04.776691  629808 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.051203848s)
	I0317 12:42:04.776729  629808 crio.go:469] duration metric: took 2.051345132s to extract the tarball
	I0317 12:42:04.776741  629808 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 12:42:04.812682  629808 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 12:42:04.849626  629808 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 12:42:04.849655  629808 cache_images.go:84] Images are preloaded, skipping loading
	I0317 12:42:04.849664  629808 kubeadm.go:934] updating node { 192.168.39.84 8443 v1.32.2 crio true true} ...
	I0317 12:42:04.849766  629808 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 12:42:04.849831  629808 ssh_runner.go:195] Run: crio config
	I0317 12:42:04.893223  629808 cni.go:84] Creating CNI manager for ""
	I0317 12:42:04.893248  629808 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 12:42:04.893263  629808 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 12:42:04.893299  629808 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.84 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-012915 NodeName:addons-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 12:42:04.893448  629808 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-012915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.84"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.84"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 12:42:04.893510  629808 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 12:42:04.902894  629808 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 12:42:04.902960  629808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 12:42:04.911411  629808 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0317 12:42:04.925933  629808 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 12:42:04.940481  629808 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0317 12:42:04.954829  629808 ssh_runner.go:195] Run: grep 192.168.39.84	control-plane.minikube.internal$ /etc/hosts
	I0317 12:42:04.958146  629808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 12:42:04.968628  629808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:42:05.094744  629808 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:42:05.110683  629808 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915 for IP: 192.168.39.84
	I0317 12:42:05.110714  629808 certs.go:194] generating shared ca certs ...
	I0317 12:42:05.110742  629808 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:05.110931  629808 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 12:42:05.379441  629808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt ...
	I0317 12:42:05.379473  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt: {Name:mk2306d0b5e6b3bdf09b5ca5ba5b5152a8f33e5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:05.379664  629808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key ...
	I0317 12:42:05.379676  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key: {Name:mkb98ae874d2a940cd7999188309d5d4cc1e9840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:05.379750  629808 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 12:42:05.700832  629808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt ...
	I0317 12:42:05.700866  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt: {Name:mk83b0d2bb202059ad3d6722f9760f4c15d9c03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:05.701026  629808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key ...
	I0317 12:42:05.701037  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key: {Name:mk602d23e5060990faa0c18974e298bb57706e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:05.701105  629808 certs.go:256] generating profile certs ...
	I0317 12:42:05.701163  629808 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.key
	I0317 12:42:05.701177  629808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt with IP's: []
	I0317 12:42:05.801791  629808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt ...
	I0317 12:42:05.801824  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: {Name:mka889b589d3c797ca759bcd90957695b79ec05d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:05.801982  629808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.key ...
	I0317 12:42:05.801991  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.key: {Name:mk5aaafc7ac86a71fc676105265d7773ed4cfc8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:05.802057  629808 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key.a0ed7607
	I0317 12:42:05.802075  629808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt.a0ed7607 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.84]
	I0317 12:42:05.900875  629808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt.a0ed7607 ...
	I0317 12:42:05.900907  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt.a0ed7607: {Name:mk59b6f012ba178489c0edd13e4415d60ccfb251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:05.901067  629808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key.a0ed7607 ...
	I0317 12:42:05.901080  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key.a0ed7607: {Name:mk733e1f058e5871887fb33ce0e670b37e9cd10d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:05.901207  629808 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt.a0ed7607 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt
	I0317 12:42:05.901302  629808 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key.a0ed7607 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key
	I0317 12:42:05.901362  629808 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.key
	I0317 12:42:05.901382  629808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.crt with IP's: []
	I0317 12:42:06.112847  629808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.crt ...
	I0317 12:42:06.112879  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.crt: {Name:mkb27a59e02d9b31ea54c91b5510eee6e4048918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:06.113056  629808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.key ...
	I0317 12:42:06.113092  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.key: {Name:mk8a39b41639853a86ca6dd154a950a5dbf083dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:06.113315  629808 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 12:42:06.113362  629808 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 12:42:06.113397  629808 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 12:42:06.113428  629808 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 12:42:06.114250  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 12:42:06.136735  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 12:42:06.157459  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 12:42:06.178356  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 12:42:06.199169  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 12:42:06.220060  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 12:42:06.241302  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 12:42:06.262133  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 12:42:06.282598  629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 12:42:06.302963  629808 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 12:42:06.317290  629808 ssh_runner.go:195] Run: openssl version
	I0317 12:42:06.322385  629808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 12:42:06.331944  629808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:42:06.335868  629808 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:42:06.335925  629808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:42:06.341035  629808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 12:42:06.350789  629808 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 12:42:06.354231  629808 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 12:42:06.354297  629808 kubeadm.go:392] StartCluster: {Name:addons-012915 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012915 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:42:06.354376  629808 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 12:42:06.354428  629808 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 12:42:06.386460  629808 cri.go:89] found id: ""
	I0317 12:42:06.386562  629808 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 12:42:06.395386  629808 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 12:42:06.403673  629808 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 12:42:06.411982  629808 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 12:42:06.412001  629808 kubeadm.go:157] found existing configuration files:
	
	I0317 12:42:06.412040  629808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 12:42:06.420062  629808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 12:42:06.420125  629808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 12:42:06.428260  629808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 12:42:06.436112  629808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 12:42:06.436164  629808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 12:42:06.444055  629808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 12:42:06.452100  629808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 12:42:06.452140  629808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 12:42:06.460408  629808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 12:42:06.468180  629808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 12:42:06.468220  629808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 12:42:06.476376  629808 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 12:42:06.525854  629808 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 12:42:06.525968  629808 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 12:42:06.614805  629808 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 12:42:06.614972  629808 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 12:42:06.615078  629808 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 12:42:06.622002  629808 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 12:42:06.674280  629808 out.go:235]   - Generating certificates and keys ...
	I0317 12:42:06.674406  629808 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 12:42:06.674481  629808 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 12:42:06.727095  629808 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 12:42:06.903647  629808 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 12:42:07.175177  629808 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 12:42:07.322027  629808 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 12:42:07.494521  629808 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 12:42:07.494655  629808 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-012915 localhost] and IPs [192.168.39.84 127.0.0.1 ::1]
	I0317 12:42:07.743681  629808 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 12:42:07.743852  629808 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-012915 localhost] and IPs [192.168.39.84 127.0.0.1 ::1]
	I0317 12:42:08.031224  629808 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 12:42:08.255766  629808 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 12:42:08.393770  629808 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 12:42:08.393843  629808 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 12:42:08.504444  629808 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 12:42:08.582587  629808 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 12:42:08.727062  629808 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 12:42:08.905914  629808 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 12:42:09.025638  629808 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
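
The kubeconfig files written above all land in /etc/kubernetes/. To see which of them reference the endpoint that the earlier staleness check greps for, a quick check inside the VM (illustrative) is:

    sudo grep -l 'https://control-plane.minikube.internal:8443' /etc/kubernetes/*.conf
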
	I0317 12:42:09.026109  629808 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 12:42:09.028540  629808 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 12:42:09.100251  629808 out.go:235]   - Booting up control plane ...
	I0317 12:42:09.100392  629808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 12:42:09.100458  629808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 12:42:09.100579  629808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 12:42:09.100675  629808 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 12:42:09.100759  629808 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 12:42:09.100835  629808 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 12:42:09.178731  629808 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 12:42:09.178869  629808 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 12:42:09.680249  629808 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.983227ms
	I0317 12:42:09.680361  629808 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 12:42:14.679505  629808 kubeadm.go:310] [api-check] The API server is healthy after 5.000948176s
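
Both health probes noted above are plain HTTP(S) endpoints and can be spot-checked by hand from inside the VM; assuming the kubeadm defaults for anonymous access to /healthz, the checks look roughly like:

    curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok
    curl -skf https://192.168.39.84:8443/healthz && echo apiserver ok
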
	I0317 12:42:14.690733  629808 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 12:42:14.706164  629808 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 12:42:14.736742  629808 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 12:42:14.736933  629808 kubeadm.go:310] [mark-control-plane] Marking the node addons-012915 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 12:42:14.748800  629808 kubeadm.go:310] [bootstrap-token] Using token: gkbgr9.pbwmys15tasd3j3c
	I0317 12:42:14.750107  629808 out.go:235]   - Configuring RBAC rules ...
	I0317 12:42:14.750283  629808 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 12:42:14.765340  629808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 12:42:14.773817  629808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 12:42:14.776778  629808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 12:42:14.780256  629808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 12:42:14.786860  629808 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 12:42:15.083348  629808 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 12:42:15.514277  629808 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 12:42:16.083840  629808 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 12:42:16.083869  629808 kubeadm.go:310] 
	I0317 12:42:16.083944  629808 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 12:42:16.083951  629808 kubeadm.go:310] 
	I0317 12:42:16.084035  629808 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 12:42:16.084044  629808 kubeadm.go:310] 
	I0317 12:42:16.084089  629808 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 12:42:16.084183  629808 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 12:42:16.084292  629808 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 12:42:16.084322  629808 kubeadm.go:310] 
	I0317 12:42:16.084410  629808 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 12:42:16.084421  629808 kubeadm.go:310] 
	I0317 12:42:16.084492  629808 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 12:42:16.084501  629808 kubeadm.go:310] 
	I0317 12:42:16.084572  629808 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 12:42:16.084695  629808 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 12:42:16.084810  629808 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 12:42:16.084828  629808 kubeadm.go:310] 
	I0317 12:42:16.084950  629808 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 12:42:16.085055  629808 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 12:42:16.085069  629808 kubeadm.go:310] 
	I0317 12:42:16.085191  629808 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gkbgr9.pbwmys15tasd3j3c \
	I0317 12:42:16.085351  629808 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 \
	I0317 12:42:16.085380  629808 kubeadm.go:310] 	--control-plane 
	I0317 12:42:16.085387  629808 kubeadm.go:310] 
	I0317 12:42:16.085482  629808 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 12:42:16.085495  629808 kubeadm.go:310] 
	I0317 12:42:16.085598  629808 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gkbgr9.pbwmys15tasd3j3c \
	I0317 12:42:16.085733  629808 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 
	I0317 12:42:16.086060  629808 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
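
If the --discovery-token-ca-cert-hash printed above ever needs to be re-derived, the standard kubeadm recipe is to hash the cluster CA public key; with the certificateDir shown earlier, that would look roughly like:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
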
	I0317 12:42:16.086100  629808 cni.go:84] Creating CNI manager for ""
	I0317 12:42:16.086122  629808 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 12:42:16.088651  629808 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 12:42:16.089943  629808 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 12:42:16.100164  629808 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
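
The 496-byte conflist copied above is a standard CNI configuration for the bridge plugin. The exact file is not shown in the log, but a representative bridge conflist of this kind (an assumption, not the verbatim minikube template) could be written as:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
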
	I0317 12:42:16.117778  629808 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 12:42:16.117949  629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:42:16.117959  629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-012915 minikube.k8s.io/updated_at=2025_03_17T12_42_16_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=addons-012915 minikube.k8s.io/primary=true
	I0317 12:42:16.149435  629808 ops.go:34] apiserver oom_adj: -16
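
The node label and the minikube-rbac ClusterRoleBinding applied just above can be verified from the host with the same kubectl context the test uses (illustrative checks, not part of the run):

    kubectl --context addons-012915 get node addons-012915 --show-labels
    kubectl --context addons-012915 get clusterrolebinding minikube-rbac -o wide
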
	I0317 12:42:16.249953  629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:42:16.750311  629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:42:17.250948  629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:42:17.750360  629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:42:18.250729  629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:42:18.751047  629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:42:19.250436  629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:42:19.750852  629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:42:19.825690  629808 kubeadm.go:1113] duration metric: took 3.707804893s to wait for elevateKubeSystemPrivileges
	I0317 12:42:19.825736  629808 kubeadm.go:394] duration metric: took 13.47144427s to StartCluster
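
The repeated `get sa default` calls above are a readiness poll for the default ServiceAccount before the addon manifests are applied; a condensed equivalent of that loop (illustrative) is:

    until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the service account usually appears within a few seconds of init
    done
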
	I0317 12:42:19.825764  629808 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:19.825918  629808 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 12:42:19.826573  629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:42:19.826839  629808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 12:42:19.826860  629808 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 12:42:19.826901  629808 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
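
The toEnable map above reflects the per-profile addon configuration; the same state can be listed or changed from the CLI with standard minikube commands (shown for reference only):

    out/minikube-linux-amd64 -p addons-012915 addons list
    out/minikube-linux-amd64 -p addons-012915 addons enable ingress
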
	I0317 12:42:19.827033  629808 addons.go:69] Setting cloud-spanner=true in profile "addons-012915"
	I0317 12:42:19.827043  629808 addons.go:69] Setting yakd=true in profile "addons-012915"
	I0317 12:42:19.827050  629808 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-012915"
	I0317 12:42:19.827064  629808 addons.go:238] Setting addon cloud-spanner=true in "addons-012915"
	I0317 12:42:19.827070  629808 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-012915"
	I0317 12:42:19.827087  629808 addons.go:69] Setting storage-provisioner=true in profile "addons-012915"
	I0317 12:42:19.827100  629808 config.go:182] Loaded profile config "addons-012915": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 12:42:19.827110  629808 addons.go:238] Setting addon storage-provisioner=true in "addons-012915"
	I0317 12:42:19.827114  629808 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-012915"
	I0317 12:42:19.827125  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.827132  629808 addons.go:69] Setting volcano=true in profile "addons-012915"
	I0317 12:42:19.827135  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.827145  629808 addons.go:238] Setting addon volcano=true in "addons-012915"
	I0317 12:42:19.827091  629808 addons.go:69] Setting metrics-server=true in profile "addons-012915"
	I0317 12:42:19.827158  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.827163  629808 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-012915"
	I0317 12:42:19.827163  629808 addons.go:238] Setting addon metrics-server=true in "addons-012915"
	I0317 12:42:19.827192  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.827194  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.827772  629808 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-012915"
	I0317 12:42:19.827804  629808 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-012915"
	I0317 12:42:19.827939  629808 addons.go:238] Setting addon yakd=true in "addons-012915"
	I0317 12:42:19.827963  629808 addons.go:69] Setting volumesnapshots=true in profile "addons-012915"
	I0317 12:42:19.827980  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.827981  629808 addons.go:238] Setting addon volumesnapshots=true in "addons-012915"
	I0317 12:42:19.828008  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.828051  629808 addons.go:69] Setting ingress-dns=true in profile "addons-012915"
	I0317 12:42:19.828067  629808 addons.go:238] Setting addon ingress-dns=true in "addons-012915"
	I0317 12:42:19.828098  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.828351  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.828406  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.828457  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.828501  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.828541  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.828558  629808 addons.go:69] Setting default-storageclass=true in profile "addons-012915"
	I0317 12:42:19.828582  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.828592  629808 addons.go:69] Setting gcp-auth=true in profile "addons-012915"
	I0317 12:42:19.828612  629808 mustload.go:65] Loading cluster: addons-012915
	I0317 12:42:19.828811  629808 config.go:182] Loaded profile config "addons-012915": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 12:42:19.829005  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.829039  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.829154  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.829181  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.828582  629808 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-012915"
	I0317 12:42:19.830132  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.830169  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.830815  629808 addons.go:69] Setting ingress=true in profile "addons-012915"
	I0317 12:42:19.830869  629808 addons.go:238] Setting addon ingress=true in "addons-012915"
	I0317 12:42:19.830900  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.830952  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.830959  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.828542  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.835846  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.836947  629808 out.go:177] * Verifying Kubernetes components...
	I0317 12:42:19.827115  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.836989  629808 addons.go:69] Setting registry=true in profile "addons-012915"
	I0317 12:42:19.837270  629808 addons.go:238] Setting addon registry=true in "addons-012915"
	I0317 12:42:19.837308  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.837426  629808 addons.go:69] Setting inspektor-gadget=true in profile "addons-012915"
	I0317 12:42:19.837446  629808 addons.go:238] Setting addon inspektor-gadget=true in "addons-012915"
	I0317 12:42:19.837470  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.838157  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.838215  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.838752  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.838791  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.839167  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.839195  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.842685  629808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:42:19.836974  629808 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-012915"
	I0317 12:42:19.842826  629808 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-012915"
	I0317 12:42:19.842861  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.852179  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.852215  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.852371  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0317 12:42:19.852533  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0317 12:42:19.852632  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35775
	I0317 12:42:19.853167  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.853218  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.859871  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.860222  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.860649  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0317 12:42:19.860800  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.860810  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.860835  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.860945  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.860985  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.861231  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.865822  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33375
	I0317 12:42:19.866268  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.866289  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.866339  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.866349  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.866499  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.866579  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.866762  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.866829  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.867398  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.867417  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.867495  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.867866  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.867890  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.868441  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.868478  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.882595  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I0317 12:42:19.882625  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.882595  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.882842  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.882855  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.882844  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I0317 12:42:19.882917  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.883004  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.883032  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0317 12:42:19.883055  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.883291  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.883668  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.883692  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.883843  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.884024  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.884118  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.884467  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.884485  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.884560  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.884585  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.884600  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.884878  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.885011  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.885427  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.885464  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.885775  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.886130  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.886150  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.886668  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.886687  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.887884  629808 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-012915"
	I0317 12:42:19.887936  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.888282  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.888329  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.891993  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.892014  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.892695  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.892905  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.895518  629808 addons.go:238] Setting addon default-storageclass=true in "addons-012915"
	I0317 12:42:19.895598  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:19.895963  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.896001  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.901208  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0317 12:42:19.901854  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.902376  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.902398  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.902822  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.903362  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.903405  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.903635  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0317 12:42:19.903998  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.904452  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.904477  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.904857  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.905354  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.905400  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.905596  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44539
	I0317 12:42:19.906174  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.906731  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.906747  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.907438  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.908055  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.908095  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.917817  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46071
	I0317 12:42:19.918439  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.918998  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.919018  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.919423  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.919990  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.920035  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.920245  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34123
	I0317 12:42:19.920738  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.921250  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.921272  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.921460  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0317 12:42:19.921904  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39827
	I0317 12:42:19.922045  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42243
	I0317 12:42:19.922252  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.922388  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.922418  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.922787  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.922805  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.922936  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.922946  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.923006  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.923411  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.923428  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.923492  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.923851  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.924038  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.924064  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.924225  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.925736  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I0317 12:42:19.926245  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.926454  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.926692  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.926710  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.927080  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.927735  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.927779  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.928378  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.928551  629808 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0317 12:42:19.929156  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34013
	I0317 12:42:19.929480  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.929906  629808 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0317 12:42:19.929927  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0317 12:42:19.929951  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:19.930014  629808 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0317 12:42:19.930050  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.930084  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.930707  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.930815  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0317 12:42:19.931663  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.931682  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.931752  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.932030  629808 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0317 12:42:19.932048  629808 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0317 12:42:19.932075  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:19.932524  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.933463  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32891
	I0317 12:42:19.933523  629808 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0317 12:42:19.934938  629808 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0317 12:42:19.934957  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0317 12:42:19.934975  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:19.935054  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.935837  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:19.935917  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:19.935936  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.936112  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:19.936327  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.936369  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:19.936548  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
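
Every addon in this run follows the pattern visible here: the rendered manifest is copied to /etc/kubernetes/addons/ over this SSH connection and then applied with the bundled kubectl. A hand-run equivalent of the apply step for the manifest above (illustrative; minikube drives this itself) would be:

    sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply \
      -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml \
      --kubeconfig=/var/lib/minikube/kubeconfig
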
	I0317 12:42:19.938207  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:19.938230  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.938699  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:19.938885  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:19.939030  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:19.939163  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:19.939214  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.939621  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:19.939644  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.939671  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0317 12:42:19.943034  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:19.943108  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0317 12:42:19.943614  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:19.943675  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0317 12:42:19.943829  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:19.944001  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:19.944052  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39223
	I0317 12:42:19.945245  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I0317 12:42:19.955871  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
	I0317 12:42:19.955882  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43817
	I0317 12:42:19.955872  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33183
	I0317 12:42:19.956072  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.956384  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.956450  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.956881  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.956895  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.956930  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.956949  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.956953  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.956989  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.957447  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.957596  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.957680  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.958049  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.958181  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.958235  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.958304  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.958362  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.958482  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.958491  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.958540  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.958590  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.958647  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.958763  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.958781  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.958837  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.959116  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.959252  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.959273  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.959396  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.959411  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.959545  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.959557  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.959613  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.959836  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.959855  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.959989  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.960002  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.960066  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.960113  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.960588  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.960653  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.960691  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.961414  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.961509  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.961528  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.961975  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.962019  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.962917  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.962954  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.962960  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.963099  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.963168  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.963211  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.963487  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:19.963502  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:19.963821  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:19.963840  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:19.963850  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:19.963857  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:19.964272  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:19.964308  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:19.965700  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.966017  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.966028  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.966072  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:19.966727  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:19.966744  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:19.966755  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	W0317 12:42:19.966832  629808 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
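
The volcano failure above is a runtime-support check rather than an install error; the runtime the node actually reports can be confirmed with a standard query (the CONTAINER-RUNTIME column should show cri-o://...):

    kubectl --context addons-012915 get nodes -o wide
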
	I0317 12:42:19.967953  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.968312  629808 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0317 12:42:19.968333  629808 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0317 12:42:19.968312  629808 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0317 12:42:19.968711  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.969543  629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0317 12:42:19.969570  629808 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0317 12:42:19.969591  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:19.969547  629808 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0317 12:42:19.970459  629808 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0317 12:42:19.970475  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0317 12:42:19.970493  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:19.971403  629808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0317 12:42:19.971545  629808 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0317 12:42:19.971563  629808 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0317 12:42:19.971596  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:19.974165  629808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0317 12:42:19.975082  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.975155  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.975322  629808 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0317 12:42:19.975701  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:19.975725  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.976486  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.976531  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:19.976757  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:19.976879  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:19.976966  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:19.978118  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:19.978332  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:19.978503  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:19.978656  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:19.979122  629808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0317 12:42:19.979194  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:19.979223  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.979814  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:19.979835  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.980282  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:19.980650  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:19.980825  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:19.980928  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:19.981941  629808 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0317 12:42:19.982030  629808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0317 12:42:19.983757  629808 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0317 12:42:19.983777  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0317 12:42:19.983795  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:19.983891  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44989
	I0317 12:42:19.984900  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0317 12:42:19.985200  629808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0317 12:42:19.985580  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.986093  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.986116  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.986908  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37315
	I0317 12:42:19.987216  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.987552  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.987728  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.987744  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.987893  629808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0317 12:42:19.988221  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.988324  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.988338  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.988439  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.989250  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.989309  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.989508  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.990069  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40565
	I0317 12:42:19.990422  629808 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0317 12:42:19.990990  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:19.991370  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:19.991386  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:19.991649  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.991716  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:19.991893  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.992193  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:19.992212  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.992419  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:19.992604  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:19.992801  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.992870  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:19.992883  629808 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0317 12:42:19.993058  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:19.993343  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:19.993481  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.994094  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.994165  629808 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0317 12:42:19.994180  629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0317 12:42:19.994194  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:19.994825  629808 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0317 12:42:19.994841  629808 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0317 12:42:19.995338  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:19.995483  629808 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0317 12:42:19.996229  629808 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0317 12:42:19.996258  629808 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0317 12:42:19.996278  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:19.996907  629808 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 12:42:19.996991  629808 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0317 12:42:19.997014  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0317 12:42:19.997036  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:19.997086  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.997546  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:19.997580  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:19.997816  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:19.997996  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:19.998061  629808 out.go:177]   - Using image docker.io/registry:2.8.3
	I0317 12:42:19.998118  629808 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:42:19.998137  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 12:42:19.998152  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:19.998306  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:19.998475  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:19.999203  629808 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0317 12:42:19.999219  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0317 12:42:19.999235  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:20.000388  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.001160  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:20.001210  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:20.001233  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.001900  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:20.001905  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.002160  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:20.002517  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:20.002624  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:20.002654  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.003060  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:20.003254  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:20.003284  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.003314  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.003483  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:20.003671  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:20.004058  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:20.004081  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.004064  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:20.004084  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:20.004102  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.004147  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:20.004251  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:20.004290  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:20.004452  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:20.004493  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:20.004579  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:20.004659  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:20.007324  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0317 12:42:20.007754  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:20.008292  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:20.008310  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:20.008690  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:20.008929  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:20.009476  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
	I0317 12:42:20.009916  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:20.010504  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:20.010522  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:20.010592  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:20.010967  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:20.011152  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:20.012140  629808 out.go:177]   - Using image docker.io/busybox:stable
	I0317 12:42:20.012350  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:20.012680  629808 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 12:42:20.012695  629808 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 12:42:20.012715  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:20.014335  629808 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0317 12:42:20.015952  629808 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0317 12:42:20.015969  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0317 12:42:20.015986  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:20.016330  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.016586  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:20.016604  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.016848  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:20.017047  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:20.017287  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:20.017471  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:20.018671  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.018984  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:20.019006  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:20.019176  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:20.019322  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:20.019412  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:20.019490  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:20.297248  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0317 12:42:20.297619  629808 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0317 12:42:20.297640  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0317 12:42:20.312103  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0317 12:42:20.379295  629808 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:42:20.382301  629808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 12:42:20.414819  629808 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0317 12:42:20.414841  629808 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0317 12:42:20.415472  629808 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0317 12:42:20.415497  629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0317 12:42:20.416541  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0317 12:42:20.491901  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:42:20.498734  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0317 12:42:20.508832  629808 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0317 12:42:20.508860  629808 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0317 12:42:20.513871  629808 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0317 12:42:20.513888  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0317 12:42:20.546612  629808 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0317 12:42:20.546648  629808 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0317 12:42:20.565431  629808 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0317 12:42:20.565460  629808 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0317 12:42:20.596812  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0317 12:42:20.630889  629808 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0317 12:42:20.630926  629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0317 12:42:20.636623  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0317 12:42:20.640045  629808 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0317 12:42:20.640066  629808 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0317 12:42:20.641874  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 12:42:20.701791  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0317 12:42:20.743417  629808 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0317 12:42:20.743455  629808 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0317 12:42:20.771844  629808 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0317 12:42:20.771878  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0317 12:42:20.788611  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0317 12:42:20.870858  629808 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0317 12:42:20.870897  629808 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0317 12:42:20.912079  629808 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0317 12:42:20.912114  629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0317 12:42:20.987265  629808 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0317 12:42:20.987295  629808 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0317 12:42:21.065518  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0317 12:42:21.117210  629808 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0317 12:42:21.117257  629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0317 12:42:21.130122  629808 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0317 12:42:21.130158  629808 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0317 12:42:21.190751  629808 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0317 12:42:21.190786  629808 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0317 12:42:21.282504  629808 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0317 12:42:21.282538  629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0317 12:42:21.294555  629808 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0317 12:42:21.294585  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0317 12:42:21.424580  629808 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0317 12:42:21.424614  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0317 12:42:21.493943  629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0317 12:42:21.493972  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0317 12:42:21.568812  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0317 12:42:21.666158  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0317 12:42:21.728233  629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0317 12:42:21.728263  629808 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0317 12:42:22.095704  629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0317 12:42:22.095729  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0317 12:42:22.305434  629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0317 12:42:22.305459  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0317 12:42:22.622253  629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0317 12:42:22.622280  629808 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0317 12:42:22.855937  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0317 12:42:23.365879  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.068590459s)
	I0317 12:42:23.365936  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:23.365938  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.053794114s)
	I0317 12:42:23.365948  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:23.365991  629808 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.986673397s)
	I0317 12:42:23.365985  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:23.366044  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:23.366075  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.94950966s)
	I0317 12:42:23.366103  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:23.366112  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:23.366045  629808 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.983720222s)
	I0317 12:42:23.366133  629808 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
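	(The bash pipeline completed just above rewrites the coredns ConfigMap so that in-cluster lookups of host.minikube.internal resolve to the host-side gateway 192.168.39.1. Reconstructed from the sed expressions in that command, the Corefile gains a log directive ahead of errors and, just above the "forward . /etc/resolv.conf" line, roughly this block:
	
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	
	The exact surrounding Corefile layout depends on the CoreDNS version bundled with Kubernetes v1.32.2.)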
	I0317 12:42:23.366493  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:23.366514  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:23.366524  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:23.366535  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:23.366741  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:23.366781  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:23.366805  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:23.366822  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:23.367381  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:23.367392  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:23.367406  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:23.367433  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:23.367444  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:23.367452  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:23.367458  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:23.367553  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:23.367580  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:23.367586  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:23.367609  629808 node_ready.go:35] waiting up to 6m0s for node "addons-012915" to be "Ready" ...
	I0317 12:42:23.367781  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:23.367808  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:23.367814  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:23.367406  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:23.382262  629808 node_ready.go:49] node "addons-012915" has status "Ready":"True"
	I0317 12:42:23.382287  629808 node_ready.go:38] duration metric: took 14.660522ms for node "addons-012915" to be "Ready" ...
	I0317 12:42:23.382297  629808 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 12:42:23.525554  629808 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:23.966938  629808 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-012915" context rescaled to 1 replicas
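	(Two in-process steps are visible here: pod_ready.go starts a readiness wait over the system-critical component labels listed at 12:42:23.382, and kapi.go:214 rescales the coredns deployment in kube-system down to 1 replica. Roughly equivalent hand-run commands, assuming the same kube context, would be:
	
	  kubectl --context addons-012915 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	  kubectl --context addons-012915 -n kube-system scale deployment coredns --replicas=1
	
	The wait would be repeated for the etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler labels; minikube performs both operations in-process via its Kubernetes client rather than by shelling out to kubectl.)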
	I0317 12:42:25.574348  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.08239362s)
	I0317 12:42:25.574410  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:25.574424  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:25.574418  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.075649671s)
	I0317 12:42:25.574456  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.977607547s)
	I0317 12:42:25.574465  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:25.574478  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:25.574483  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:25.574499  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:25.574747  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:25.574770  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:25.574781  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:25.574789  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:25.574853  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:25.574873  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:25.574884  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:25.574882  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:25.574892  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:25.574893  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:25.574966  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:25.574991  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:25.574998  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:25.575584  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:25.575602  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:25.575616  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:25.575624  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:25.575809  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:25.575816  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:25.576578  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:25.576591  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:25.582161  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:25.628507  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:25.628539  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:25.628817  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:25.628838  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:25.628866  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:26.794443  629808 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0317 12:42:26.794485  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:26.798095  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:26.798474  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:26.798506  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:26.798762  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:26.798993  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:26.799159  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:26.799292  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:27.164818  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.528149942s)
	I0317 12:42:27.164874  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.522968285s)
	I0317 12:42:27.164892  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.164906  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.164918  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.164932  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.164956  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.463136829s)
	I0317 12:42:27.164983  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.164997  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.165078  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.376434924s)
	I0317 12:42:27.165112  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.165125  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.165126  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.099553539s)
	I0317 12:42:27.165198  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.165210  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.165219  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.165223  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:27.165228  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.165237  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.165240  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.165249  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.165251  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:27.165256  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.165344  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.596500278s)
	I0317 12:42:27.165368  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.165387  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.165387  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.165409  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.165536  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.499341622s)
	W0317 12:42:27.165575  629808 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0317 12:42:27.165605  629808 retry.go:31] will retry after 176.800041ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
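	(The failure above is an ordering race rather than a persistent error: the same kubectl apply both creates the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the class cannot be mapped until the API server has established the new CRDs, hence "ensure CRDs are installed first". minikube simply retries; the retry visible at 12:42:27.342 below re-runs the apply with --force and completes about two seconds later. A manual, roughly equivalent workaround with these same manifests would be to gate the class on CRD establishment:
	
	  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	  kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	)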
	I0317 12:42:27.165656  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.165671  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.166049  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.166066  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.166084  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.166096  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.166104  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.166110  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.166155  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.166161  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.166406  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:27.166445  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.166452  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.166462  629808 addons.go:479] Verifying addon registry=true in "addons-012915"
	I0317 12:42:27.167873  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.167896  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.167896  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:27.167906  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.167916  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.167923  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:27.167942  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.167874  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.167970  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.167983  629808 addons.go:479] Verifying addon ingress=true in "addons-012915"
	I0317 12:42:27.167960  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.168391  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:27.168486  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.168508  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.169872  629808 out.go:177] * Verifying registry addon...
	I0317 12:42:27.169903  629808 out.go:177] * Verifying ingress addon...
	I0317 12:42:27.170046  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:27.170061  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.170894  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.170911  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.170920  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.171133  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.171150  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.171160  629808 addons.go:479] Verifying addon metrics-server=true in "addons-012915"
	I0317 12:42:27.171239  629808 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-012915 service yakd-dashboard -n yakd-dashboard
	
	I0317 12:42:27.172109  629808 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0317 12:42:27.172769  629808 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0317 12:42:27.177635  629808 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0317 12:42:27.177659  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:27.199182  629808 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0317 12:42:27.199224  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:27.213504  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:27.213528  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:27.213846  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:27.213868  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:27.213870  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:27.257729  629808 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0317 12:42:27.338908  629808 addons.go:238] Setting addon gcp-auth=true in "addons-012915"
	I0317 12:42:27.338979  629808 host.go:66] Checking if "addons-012915" exists ...
	I0317 12:42:27.339273  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:27.339321  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:27.342552  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0317 12:42:27.354754  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0317 12:42:27.355186  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:27.355639  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:27.355661  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:27.356015  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:27.356672  629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:42:27.356719  629808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:42:27.371587  629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
	I0317 12:42:27.372069  629808 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:42:27.372531  629808 main.go:141] libmachine: Using API Version  1
	I0317 12:42:27.372554  629808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:42:27.372950  629808 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:42:27.373153  629808 main.go:141] libmachine: (addons-012915) Calling .GetState
	I0317 12:42:27.374804  629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
	I0317 12:42:27.375036  629808 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0317 12:42:27.375061  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
	I0317 12:42:27.377884  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:27.378284  629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
	I0317 12:42:27.378311  629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
	I0317 12:42:27.378514  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
	I0317 12:42:27.378705  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
	I0317 12:42:27.378881  629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
	I0317 12:42:27.379044  629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
	I0317 12:42:27.677030  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:27.677083  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:28.029599  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:28.183837  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:28.183933  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:28.431420  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.575421913s)
	I0317 12:42:28.431480  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:28.431501  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:28.431819  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:28.431838  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:28.431847  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:28.431855  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:28.432119  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:28.432133  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:28.432145  629808 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-012915"
	I0317 12:42:28.433650  629808 out.go:177] * Verifying csi-hostpath-driver addon...
	I0317 12:42:28.435688  629808 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0317 12:42:28.442551  629808 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0317 12:42:28.442565  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:28.680986  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:28.680984  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:28.940881  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:29.176120  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:29.177109  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:29.325544  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.982873895s)
	I0317 12:42:29.325638  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:29.325639  629808 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.950584255s)
	I0317 12:42:29.325661  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:29.327827  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:29.327920  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:29.327936  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:29.327957  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:29.328547  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:29.328704  629808 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0317 12:42:29.328828  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:29.328878  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:29.328849  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:29.331392  629808 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0317 12:42:29.332532  629808 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0317 12:42:29.332561  629808 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0317 12:42:29.401043  629808 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0317 12:42:29.401083  629808 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0317 12:42:29.438225  629808 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0317 12:42:29.438258  629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0317 12:42:29.440504  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:29.461366  629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0317 12:42:29.675195  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:29.676295  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:29.939006  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:30.031059  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:30.175990  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:30.176498  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:30.469448  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:30.552315  629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.090899518s)
	I0317 12:42:30.552377  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:30.552388  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:30.552696  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:30.552722  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:30.552731  629808 main.go:141] libmachine: Making call to close driver server
	I0317 12:42:30.552730  629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
	I0317 12:42:30.552738  629808 main.go:141] libmachine: (addons-012915) Calling .Close
	I0317 12:42:30.553015  629808 main.go:141] libmachine: Successfully made call to close driver server
	I0317 12:42:30.553064  629808 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 12:42:30.555013  629808 addons.go:479] Verifying addon gcp-auth=true in "addons-012915"
	I0317 12:42:30.556730  629808 out.go:177] * Verifying gcp-auth addon...
	I0317 12:42:30.558828  629808 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0317 12:42:30.622204  629808 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0317 12:42:30.622235  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:30.678439  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:30.679400  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:30.939775  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:31.062414  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:31.176312  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:31.176920  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:31.440287  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:31.561808  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:31.676310  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:31.676358  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:31.939634  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:32.061464  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:32.176322  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:32.176412  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:32.440402  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:32.531758  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:32.562371  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:32.676892  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:32.677047  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:32.939380  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:33.062362  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:33.176279  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:33.176347  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:33.440019  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:33.561384  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:33.675452  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:33.676698  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:33.938913  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:34.061750  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:34.176662  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:34.176788  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:34.439125  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:34.531837  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:34.562453  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:34.675627  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:34.676407  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:34.940603  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:35.063505  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:35.176258  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:35.176291  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:35.440938  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:35.562777  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:35.677167  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:35.677299  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:35.940075  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:36.062337  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:36.175807  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:36.177068  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:36.440330  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:36.532236  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:36.562553  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:36.675869  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:36.676077  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:36.939587  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:37.063040  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:37.176394  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:37.176520  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:37.440253  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:37.562502  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:37.675630  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:37.676184  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:37.939384  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:38.061640  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:38.176072  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:38.176183  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:38.439348  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:38.561954  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:38.676913  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:38.676938  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:38.939643  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:39.422449  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:39.422586  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:39.422686  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:39.422975  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:39.520636  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:39.562354  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:39.675094  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:39.675924  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:39.940446  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:40.062659  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:40.175720  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:40.175995  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:40.439334  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:40.562819  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:40.677578  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:40.677791  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:40.939307  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:41.061692  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:41.175519  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:41.175887  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:41.438954  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:41.530801  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:41.562309  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:41.675437  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:41.676475  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:41.939835  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:42.062175  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:42.174934  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:42.176275  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:42.439362  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:42.561196  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:42.675522  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:42.675906  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:42.939700  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:43.478616  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:43.479087  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:43.479175  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:43.482070  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:43.531269  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:43.580013  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:43.680790  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:43.680825  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:43.938793  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:44.062400  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:44.174903  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:44.176586  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:44.439953  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:44.564885  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:44.677004  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:44.677958  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:44.939413  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:45.062087  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:45.177076  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:45.177120  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:45.439576  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:45.561384  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:45.675484  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:45.675493  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:45.939208  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:46.031082  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:46.062369  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:46.175314  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:46.179352  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:46.439784  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:46.562554  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:46.676489  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:46.677442  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:46.940295  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:47.061680  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:47.175742  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:47.176200  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:47.439919  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:47.562915  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:47.676865  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:47.677284  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:47.940421  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:48.031481  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:48.062882  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:48.176012  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:48.176584  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:48.439737  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:48.562843  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:48.676535  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:48.676536  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:48.938590  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:49.062279  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:49.175389  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:49.176884  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:49.440138  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:49.563016  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:49.677039  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:49.677038  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:49.938974  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:50.061896  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:50.176902  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:50.176970  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:50.439317  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:50.531001  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:50.561407  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:50.675227  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:50.675368  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:50.940528  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:51.069121  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:51.176732  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:51.176735  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:51.439065  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:51.562561  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:51.675384  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:51.676629  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:51.939444  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:52.062171  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:52.174863  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:52.175705  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:52.438522  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:52.531362  629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
	I0317 12:42:52.562292  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:52.674982  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:52.676730  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:52.938824  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:53.062392  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:53.176496  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:53.176519  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:53.440042  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:53.561654  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:53.675762  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:53.676165  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:53.942279  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:54.031943  629808 pod_ready.go:93] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"True"
	I0317 12:42:54.031969  629808 pod_ready.go:82] duration metric: took 30.506379989s for pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.031979  629808 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-jxb6r" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.033529  629808 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-jxb6r" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-jxb6r" not found
	I0317 12:42:54.033554  629808 pod_ready.go:82] duration metric: took 1.568065ms for pod "coredns-668d6bf9bc-jxb6r" in "kube-system" namespace to be "Ready" ...
	E0317 12:42:54.033563  629808 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-jxb6r" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-jxb6r" not found
	I0317 12:42:54.033572  629808 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-z7dq4" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.038119  629808 pod_ready.go:93] pod "coredns-668d6bf9bc-z7dq4" in "kube-system" namespace has status "Ready":"True"
	I0317 12:42:54.038138  629808 pod_ready.go:82] duration metric: took 4.557967ms for pod "coredns-668d6bf9bc-z7dq4" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.038148  629808 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-012915" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.042147  629808 pod_ready.go:93] pod "etcd-addons-012915" in "kube-system" namespace has status "Ready":"True"
	I0317 12:42:54.042163  629808 pod_ready.go:82] duration metric: took 4.010086ms for pod "etcd-addons-012915" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.042171  629808 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-012915" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.045539  629808 pod_ready.go:93] pod "kube-apiserver-addons-012915" in "kube-system" namespace has status "Ready":"True"
	I0317 12:42:54.045556  629808 pod_ready.go:82] duration metric: took 3.379345ms for pod "kube-apiserver-addons-012915" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.045564  629808 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-012915" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.061403  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:54.175296  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:54.175361  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:54.230108  629808 pod_ready.go:93] pod "kube-controller-manager-addons-012915" in "kube-system" namespace has status "Ready":"True"
	I0317 12:42:54.230135  629808 pod_ready.go:82] duration metric: took 184.565575ms for pod "kube-controller-manager-addons-012915" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.230148  629808 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gfpml" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.438989  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:54.562061  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:54.629571  629808 pod_ready.go:93] pod "kube-proxy-gfpml" in "kube-system" namespace has status "Ready":"True"
	I0317 12:42:54.629598  629808 pod_ready.go:82] duration metric: took 399.442811ms for pod "kube-proxy-gfpml" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.629611  629808 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-012915" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:54.675395  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:54.676040  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:54.940219  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:55.029161  629808 pod_ready.go:93] pod "kube-scheduler-addons-012915" in "kube-system" namespace has status "Ready":"True"
	I0317 12:42:55.029189  629808 pod_ready.go:82] duration metric: took 399.569707ms for pod "kube-scheduler-addons-012915" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:55.029204  629808 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gr4p2" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:55.061870  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:55.176167  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:55.176890  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:55.429753  629808 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-gr4p2" in "kube-system" namespace has status "Ready":"True"
	I0317 12:42:55.429780  629808 pod_ready.go:82] duration metric: took 400.567661ms for pod "nvidia-device-plugin-daemonset-gr4p2" in "kube-system" namespace to be "Ready" ...
	I0317 12:42:55.429791  629808 pod_ready.go:39] duration metric: took 32.047481057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 12:42:55.429817  629808 api_server.go:52] waiting for apiserver process to appear ...
	I0317 12:42:55.429887  629808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 12:42:55.438618  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:55.467640  629808 api_server.go:72] duration metric: took 35.64073254s to wait for apiserver process to appear ...
	I0317 12:42:55.467677  629808 api_server.go:88] waiting for apiserver healthz status ...
	I0317 12:42:55.467704  629808 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0317 12:42:55.471885  629808 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I0317 12:42:55.472829  629808 api_server.go:141] control plane version: v1.32.2
	I0317 12:42:55.472852  629808 api_server.go:131] duration metric: took 5.168362ms to wait for apiserver health ...
	I0317 12:42:55.472860  629808 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 12:42:55.562345  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:55.631728  629808 system_pods.go:59] 18 kube-system pods found
	I0317 12:42:55.631787  629808 system_pods.go:61] "amd-gpu-device-plugin-5pkbv" [8713b029-97c0-4a95-a703-886b238a1cf1] Running
	I0317 12:42:55.631799  629808 system_pods.go:61] "coredns-668d6bf9bc-z7dq4" [0a5b10dc-42b3-4a25-9f03-222b3324baf9] Running
	I0317 12:42:55.631810  629808 system_pods.go:61] "csi-hostpath-attacher-0" [592324f8-e091-4a7a-a486-93d54a56c0f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0317 12:42:55.631819  629808 system_pods.go:61] "csi-hostpath-resizer-0" [610b15f2-2636-4b8f-9363-d7c6eca55342] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0317 12:42:55.631834  629808 system_pods.go:61] "csi-hostpathplugin-n92cl" [6895b42e-cbbb-4c89-93d5-601d91db4e4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0317 12:42:55.631846  629808 system_pods.go:61] "etcd-addons-012915" [8e4cb301-ecab-427b-af78-4451c425dc9e] Running
	I0317 12:42:55.631853  629808 system_pods.go:61] "kube-apiserver-addons-012915" [7a0533b2-7ed1-4fb2-9377-e781b3a3b8a4] Running
	I0317 12:42:55.631861  629808 system_pods.go:61] "kube-controller-manager-addons-012915" [221489e4-a706-4054-817b-097f795a4c7b] Running
	I0317 12:42:55.631867  629808 system_pods.go:61] "kube-ingress-dns-minikube" [7fc15951-f543-49f2-aa75-cfec8ee9f60a] Running
	I0317 12:42:55.631875  629808 system_pods.go:61] "kube-proxy-gfpml" [c443023b-cd1a-4c68-95ea-21e945f88e15] Running
	I0317 12:42:55.631881  629808 system_pods.go:61] "kube-scheduler-addons-012915" [42e2b951-6247-4322-9e01-755f18bd2c8f] Running
	I0317 12:42:55.631894  629808 system_pods.go:61] "metrics-server-7fbb699795-p2svs" [7cb96dd5-6d04-4b62-a0c5-af14472757d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0317 12:42:55.631900  629808 system_pods.go:61] "nvidia-device-plugin-daemonset-gr4p2" [c678dd53-1e45-417a-b06d-c754b6a9ace2] Running
	I0317 12:42:55.631909  629808 system_pods.go:61] "registry-6c88467877-8k6gk" [f211c29d-606d-447b-a8fa-69017766f2db] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0317 12:42:55.631921  629808 system_pods.go:61] "registry-proxy-7r5g2" [ada328aa-4416-4e30-a5df-7dc790f2663a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0317 12:42:55.631932  629808 system_pods.go:61] "snapshot-controller-68b874b76f-6rz9m" [10d4e550-08b3-4311-afc0-fbaa5490aa26] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0317 12:42:55.631938  629808 system_pods.go:61] "snapshot-controller-68b874b76f-9q74h" [a32b085e-c793-429b-9098-8df28c689c6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0317 12:42:55.631947  629808 system_pods.go:61] "storage-provisioner" [c7f99af7-bb01-4504-ac70-77dc8ab04b3e] Running
	I0317 12:42:55.631958  629808 system_pods.go:74] duration metric: took 159.091098ms to wait for pod list to return data ...
	I0317 12:42:55.631973  629808 default_sa.go:34] waiting for default service account to be created ...
	I0317 12:42:55.676919  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:55.678428  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:55.829977  629808 default_sa.go:45] found service account: "default"
	I0317 12:42:55.830002  629808 default_sa.go:55] duration metric: took 198.018531ms for default service account to be created ...
	I0317 12:42:55.830010  629808 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 12:42:55.940797  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:56.043626  629808 system_pods.go:86] 18 kube-system pods found
	I0317 12:42:56.043656  629808 system_pods.go:89] "amd-gpu-device-plugin-5pkbv" [8713b029-97c0-4a95-a703-886b238a1cf1] Running
	I0317 12:42:56.043662  629808 system_pods.go:89] "coredns-668d6bf9bc-z7dq4" [0a5b10dc-42b3-4a25-9f03-222b3324baf9] Running
	I0317 12:42:56.043669  629808 system_pods.go:89] "csi-hostpath-attacher-0" [592324f8-e091-4a7a-a486-93d54a56c0f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0317 12:42:56.043676  629808 system_pods.go:89] "csi-hostpath-resizer-0" [610b15f2-2636-4b8f-9363-d7c6eca55342] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0317 12:42:56.043683  629808 system_pods.go:89] "csi-hostpathplugin-n92cl" [6895b42e-cbbb-4c89-93d5-601d91db4e4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0317 12:42:56.043688  629808 system_pods.go:89] "etcd-addons-012915" [8e4cb301-ecab-427b-af78-4451c425dc9e] Running
	I0317 12:42:56.043691  629808 system_pods.go:89] "kube-apiserver-addons-012915" [7a0533b2-7ed1-4fb2-9377-e781b3a3b8a4] Running
	I0317 12:42:56.043695  629808 system_pods.go:89] "kube-controller-manager-addons-012915" [221489e4-a706-4054-817b-097f795a4c7b] Running
	I0317 12:42:56.043702  629808 system_pods.go:89] "kube-ingress-dns-minikube" [7fc15951-f543-49f2-aa75-cfec8ee9f60a] Running
	I0317 12:42:56.043705  629808 system_pods.go:89] "kube-proxy-gfpml" [c443023b-cd1a-4c68-95ea-21e945f88e15] Running
	I0317 12:42:56.043711  629808 system_pods.go:89] "kube-scheduler-addons-012915" [42e2b951-6247-4322-9e01-755f18bd2c8f] Running
	I0317 12:42:56.043715  629808 system_pods.go:89] "metrics-server-7fbb699795-p2svs" [7cb96dd5-6d04-4b62-a0c5-af14472757d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0317 12:42:56.043721  629808 system_pods.go:89] "nvidia-device-plugin-daemonset-gr4p2" [c678dd53-1e45-417a-b06d-c754b6a9ace2] Running
	I0317 12:42:56.043726  629808 system_pods.go:89] "registry-6c88467877-8k6gk" [f211c29d-606d-447b-a8fa-69017766f2db] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0317 12:42:56.043733  629808 system_pods.go:89] "registry-proxy-7r5g2" [ada328aa-4416-4e30-a5df-7dc790f2663a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0317 12:42:56.043740  629808 system_pods.go:89] "snapshot-controller-68b874b76f-6rz9m" [10d4e550-08b3-4311-afc0-fbaa5490aa26] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0317 12:42:56.043748  629808 system_pods.go:89] "snapshot-controller-68b874b76f-9q74h" [a32b085e-c793-429b-9098-8df28c689c6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0317 12:42:56.043751  629808 system_pods.go:89] "storage-provisioner" [c7f99af7-bb01-4504-ac70-77dc8ab04b3e] Running
	I0317 12:42:56.043759  629808 system_pods.go:126] duration metric: took 213.743605ms to wait for k8s-apps to be running ...
	I0317 12:42:56.043771  629808 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 12:42:56.043818  629808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 12:42:56.057578  629808 system_svc.go:56] duration metric: took 13.79821ms WaitForService to wait for kubelet
	I0317 12:42:56.057607  629808 kubeadm.go:582] duration metric: took 36.230706889s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 12:42:56.057628  629808 node_conditions.go:102] verifying NodePressure condition ...
	I0317 12:42:56.062024  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:56.176645  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:56.176824  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:56.229839  629808 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 12:42:56.229878  629808 node_conditions.go:123] node cpu capacity is 2
	I0317 12:42:56.229897  629808 node_conditions.go:105] duration metric: took 172.262786ms to run NodePressure ...
	I0317 12:42:56.229914  629808 start.go:241] waiting for startup goroutines ...
	I0317 12:42:56.438910  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:56.562508  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:56.676230  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:56.676320  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:56.940012  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:57.061619  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:57.175907  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:57.175914  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:57.439014  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:57.561593  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:57.676244  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:57.676248  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:57.939781  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:58.062264  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:58.174978  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:58.176486  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:58.439452  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:58.563294  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:58.674767  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:58.676491  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:58.939985  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:59.061728  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:59.175581  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:59.176202  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:59.439139  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:42:59.561775  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:59.676897  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:42:59.676966  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:42:59.938738  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:00.061828  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:00.176279  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:43:00.176509  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:00.439982  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:00.562510  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:00.675153  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:43:00.675373  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:00.941154  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:01.062240  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:01.175203  629808 kapi.go:107] duration metric: took 34.003090918s to wait for kubernetes.io/minikube-addons=registry ...
	I0317 12:43:01.176489  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:01.439722  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:01.562728  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:01.677533  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:01.940150  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:02.061882  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:02.175722  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:02.439056  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:02.563380  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:02.676353  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:02.941926  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:03.063082  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:03.176888  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:03.439973  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:03.561493  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:03.676323  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:03.946628  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:04.067290  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:04.177103  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:04.440068  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:04.561692  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:04.677115  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:04.939043  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:05.062051  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:05.176129  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:05.438972  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:05.561443  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:05.676802  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:05.939511  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:06.068338  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:06.176081  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:06.439696  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:06.562779  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:06.676945  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:06.938982  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:07.061620  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:07.178671  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:07.439145  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:07.561670  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:07.676664  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:07.941374  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:08.062367  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:08.176032  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:08.438905  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:08.562267  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:08.676461  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:08.942134  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:09.062154  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:09.176280  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:09.439358  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:09.562005  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:09.686123  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:09.938651  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:10.062425  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:10.176445  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:10.439544  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:10.563624  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:10.676565  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:10.939876  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:11.061302  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:11.176393  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:11.439962  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:11.561779  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:11.676627  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:11.939823  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:12.062978  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:12.175763  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:12.438615  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:12.561953  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:12.675631  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:12.940007  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:13.061911  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:13.176330  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:13.442420  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:13.561312  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:13.676156  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:13.940588  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:14.062184  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:14.176082  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:14.474140  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:14.562780  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:14.676734  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:14.938963  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:15.063222  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:15.177412  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:15.439653  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:15.562450  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:15.680006  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:15.938519  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:16.062237  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:16.176354  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:16.439392  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:16.562004  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:16.677517  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:16.939438  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:17.061683  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:17.175900  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:17.438932  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:17.562028  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:17.678255  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:17.946145  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:18.062239  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:18.176537  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:18.447757  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:18.562943  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:18.682327  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:18.940584  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:19.062392  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:19.179940  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:19.439580  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:19.562303  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:19.676387  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:19.941886  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:20.134778  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:20.177786  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:20.440918  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:20.563067  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:20.676935  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:20.939933  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:21.062106  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:21.184568  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:21.440225  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:21.562623  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:21.677019  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:21.939438  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:22.062129  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:22.180344  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:22.439128  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:22.561307  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:22.675935  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:22.941259  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:23.061959  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:23.176033  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:23.696429  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:23.696602  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:23.696809  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:23.940937  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:24.061737  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:24.176675  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:24.439759  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:24.562432  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:24.676494  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:24.939205  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:25.062547  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:25.176786  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:25.439248  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:25.562636  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:25.677108  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:25.940074  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:26.061776  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:26.176692  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:26.440251  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:26.563675  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:26.676954  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:26.939684  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:27.062084  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:27.176204  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:27.439178  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:27.561800  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:27.676859  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:27.939582  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:28.063379  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:28.176732  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:28.438679  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:28.561952  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:28.681125  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:28.939970  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:29.061454  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:29.176286  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:29.439957  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:29.562490  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:29.676226  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:29.939097  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:30.061971  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:30.181859  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:30.438649  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:30.562346  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:30.676435  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:30.943077  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:31.061500  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:31.176532  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:31.439506  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:31.562394  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:31.676174  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:31.939759  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:32.062122  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:32.178470  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:32.439625  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:43:32.562976  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:32.676450  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:32.940100  629808 kapi.go:107] duration metric: took 1m4.504416534s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0317 12:43:33.061618  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:33.176485  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:33.562256  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:33.677098  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:34.062325  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:34.176823  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:34.562284  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:34.676476  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:35.062542  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:35.176364  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:35.562188  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:35.676402  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:36.062530  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:36.176495  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:36.562344  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:36.676487  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:37.062616  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:37.353440  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:37.562281  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:37.676110  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:38.062913  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:38.175497  629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:43:38.562763  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:38.677016  629808 kapi.go:107] duration metric: took 1m11.504242894s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0317 12:43:39.062130  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:39.561941  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:40.062866  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:40.562682  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:41.062695  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:41.562016  629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:43:42.062827  629808 kapi.go:107] duration metric: took 1m11.503992842s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0317 12:43:42.064674  629808 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-012915 cluster.
	I0317 12:43:42.066050  629808 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0317 12:43:42.067355  629808 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0317 12:43:42.068758  629808 out.go:177] * Enabled addons: ingress-dns, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0317 12:43:42.070069  629808 addons.go:514] duration metric: took 1m22.243173543s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0317 12:43:42.070118  629808 start.go:246] waiting for cluster config update ...
	I0317 12:43:42.070143  629808 start.go:255] writing updated cluster config ...
	I0317 12:43:42.070414  629808 ssh_runner.go:195] Run: rm -f paused
	I0317 12:43:42.122322  629808 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 12:43:42.124033  629808 out.go:177] * Done! kubectl is now configured to use "addons-012915" cluster and "default" namespace by default
	
	
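The gcp-auth output above (12:43:42) mentions the `gcp-auth-skip-secret` label as the way to keep credentials out of a specific pod. A minimal sketch of such a manifest follows, assuming a standard pod spec; only the label key comes from the output, while the pod name, image, and label value are illustrative placeholders, not part of this test run.

	# hypothetical pod manifest illustrating the opt-out label from the gcp-auth output
	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-creds        # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"    # label key quoted in the log; the value here is a placeholder
	spec:
	  containers:
	  - name: app
	    image: nginx:alpine             # placeholder image

Per the same output, pods created without this label get the credentials mounted automatically, and pods that already existed pick them up only after being recreated or after `addons enable` is rerun with `--refresh`.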
	==> CRI-O <==
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.362144411Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d0cfa5c-b107-4eda-887b-5ad0da876a56 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.363313357Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215618363276749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d0cfa5c-b107-4eda-887b-5ad0da876a56 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.363998987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67aded8c-36de-4229-b3d3-f3383b765b57 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.364119038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67aded8c-36de-4229-b3d3-f3383b765b57 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.364452816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd66a51459e1bef3f8408edd3f8f513b15938d306a8eca07b17aa8c7b9e28b71,PodSandboxId:c4de7d0c7eebbdc93b0251dfd2385920126017aca7d4e6cd7cdd7707fe087d23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1742215481306876676,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8de7ed01-2923-4e6d-8d79-73b590e77823,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1e96cb1f9da48b54130c0a5b77b5950455c825228b80873ba1bed389be3129,PodSandboxId:44e0a2a5eb268d5c86d610b41c6a58c2054d5b9c14050d5e4efd0e82487f9e66,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1742215428544863639,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86b0f221-352a-43ab-8627-f3bd097570e7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c523da30019c3442fb38769fc3ff8bd361afb7328c1b6ae987f0f2ed8fca2e18,PodSandboxId:ec2562250167061bb61a89c84962bf285130b7d07feb2dec99b816f27bc60678,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1742215418117594801,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xdmt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 015e05a6-3f15-4b89-be12-3508da6ca614,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:efdb31bc8b6c1ecfe33294cbdf0d81864ce54c14641c1fcbd231bb9a88adc0f6,PodSandboxId:db1849458e66b95bb50356b7fb92fc3e6f8095b644aca37f4fbb67bed3ded80e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400541922291,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-66l7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0dea4d95-0523-4d5b-84cb-9adc16a15c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133895556e47e4d79d84bfc1afab7f31237ad7cb5aa5d0d3137bae9a0ec19f48,PodSandboxId:80849abcc2545a296f9515d1f00e3c082666c079ce8027e9c10fe3d2f886236f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400444569254,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hq84q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 76f8d5e3-24ae-4b5e-a45c-f01b16d165fd,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a360856a0a6bf25f96af4f507d594db92c89567721ac9a63b6fed61b702d2c,PodSandboxId:7b3ad027614e622bdd42ddc39f4ad1fb21323dea293f705975466fee1a56f5b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1742215373208789588,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5pkbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8713b029-97c0-4a95-a703-886b238a1cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce580779eef53d01ece27b4bc2ffba2f4807f25438ba0cd881cef251d21b834,PodSandboxId:aeda29e4d26e13a5232196926d3e5d4baca39b959e6d35e5e5b4042a2d2df7fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1742215370723940907,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc15951-f543-49f2-aa75-cfec8ee9f60a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8be226d7c03469fc7293b21c8dff4b351bcc17daa0d40e8e38e9703a232b644b,PodSandboxId:7ca8740fcb29f6238a08bd5f550930bfe27bd53df6291f8147b14727dd088e19,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742215346578805704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f99af7-bb01-4504-ac70-77dc8ab04b3e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a181ba896ebb5402cbb2fde44e9c8995e6b16197f14fb89471cd7da7a8d0afa,PodSandboxId:ea1483bc05de6bb1e3458948160d2dba4613e2f911875da97871ee48a8167bb6,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742215343913365647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z7dq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a5b10dc-42b3-4a25-9f03-222b3324baf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:5d9804125479f6a1365b6354df6e0220e2d0d6a2af965300bf6bee19a352c513,PodSandboxId:d90f1092da620e69bbc467dbc17017d26ed7e351f1f460d8cfa2f5fa744c0332,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742215341066312138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfpml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c443023b-cd1a-4c68-95ea-21e945f88e15,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d73a8dd6794a66be96fa55340cfe
f9c096d211461a6597900775ee7fb31061d,PodSandboxId:aecb92d28c7683fea0d3b0fe6004aa37718e726841435fa321421ef2c28eddf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742215330428864032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5efd14a8f4c69122921d67caed5e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a2bc102a5d64f2f40e006e80ed3dbb37f3996b3e6daaa
e677a0e5bbbdf293d0,PodSandboxId:a86fb64e69f35fb457529972cca93e35e9b7ef8abb2ce0cd235207aa6f60cd21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742215330422210457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eed52998b2ef38f7221edf88370515,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c159463a3ceb104234df8916a5f45b4790ff
840cb429031de6dac13d8d11ecf8,PodSandboxId:27f65aa74ce37bb1939e121d7278958a4870485e53a810e0f75a54f492f2813e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742215330395620755,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0e1e934e5a9e0fca80e2ef7acaa680,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60f82ce8690170f47fa0a774c43f343703f1c09f94e277299d56
ced83465e0f,PodSandboxId:3adc1ca1797b27a09a788f6cb790d5a1e36fcb67045a86885668b83cd8b84cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742215330374223398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 653939de3e18bd530b1f5fa403ec7f64,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67aded8c-36de-4229-b3d3-f3383b765b57 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.368541068Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.395523128Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=91f0a84f-4c4e-4812-a12e-81129b8abc78 name=/runtime.v1.RuntimeService/Version
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.395607042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=91f0a84f-4c4e-4812-a12e-81129b8abc78 name=/runtime.v1.RuntimeService/Version
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.396709685Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed213f12-fdd4-4ca4-b150-6a0450a1b009 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.397832759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215618397805395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed213f12-fdd4-4ca4-b150-6a0450a1b009 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.398485895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c3afea3-03b7-4104-a90b-d9bc8a1d3d43 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.398542393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c3afea3-03b7-4104-a90b-d9bc8a1d3d43 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.398816277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd66a51459e1bef3f8408edd3f8f513b15938d306a8eca07b17aa8c7b9e28b71,PodSandboxId:c4de7d0c7eebbdc93b0251dfd2385920126017aca7d4e6cd7cdd7707fe087d23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1742215481306876676,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8de7ed01-2923-4e6d-8d79-73b590e77823,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1e96cb1f9da48b54130c0a5b77b5950455c825228b80873ba1bed389be3129,PodSandboxId:44e0a2a5eb268d5c86d610b41c6a58c2054d5b9c14050d5e4efd0e82487f9e66,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1742215428544863639,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86b0f221-352a-43ab-8627-f3bd097570e7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c523da30019c3442fb38769fc3ff8bd361afb7328c1b6ae987f0f2ed8fca2e18,PodSandboxId:ec2562250167061bb61a89c84962bf285130b7d07feb2dec99b816f27bc60678,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1742215418117594801,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xdmt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 015e05a6-3f15-4b89-be12-3508da6ca614,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:efdb31bc8b6c1ecfe33294cbdf0d81864ce54c14641c1fcbd231bb9a88adc0f6,PodSandboxId:db1849458e66b95bb50356b7fb92fc3e6f8095b644aca37f4fbb67bed3ded80e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400541922291,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-66l7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0dea4d95-0523-4d5b-84cb-9adc16a15c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133895556e47e4d79d84bfc1afab7f31237ad7cb5aa5d0d3137bae9a0ec19f48,PodSandboxId:80849abcc2545a296f9515d1f00e3c082666c079ce8027e9c10fe3d2f886236f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400444569254,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hq84q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 76f8d5e3-24ae-4b5e-a45c-f01b16d165fd,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a360856a0a6bf25f96af4f507d594db92c89567721ac9a63b6fed61b702d2c,PodSandboxId:7b3ad027614e622bdd42ddc39f4ad1fb21323dea293f705975466fee1a56f5b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1742215373208789588,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5pkbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8713b029-97c0-4a95-a703-886b238a1cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce580779eef53d01ece27b4bc2ffba2f4807f25438ba0cd881cef251d21b834,PodSandboxId:aeda29e4d26e13a5232196926d3e5d4baca39b959e6d35e5e5b4042a2d2df7fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1742215370723940907,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc15951-f543-49f2-aa75-cfec8ee9f60a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8be226d7c03469fc7293b21c8dff4b351bcc17daa0d40e8e38e9703a232b644b,PodSandboxId:7ca8740fcb29f6238a08bd5f550930bfe27bd53df6291f8147b14727dd088e19,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742215346578805704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f99af7-bb01-4504-ac70-77dc8ab04b3e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a181ba896ebb5402cbb2fde44e9c8995e6b16197f14fb89471cd7da7a8d0afa,PodSandboxId:ea1483bc05de6bb1e3458948160d2dba4613e2f911875da97871ee48a8167bb6,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742215343913365647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z7dq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a5b10dc-42b3-4a25-9f03-222b3324baf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:5d9804125479f6a1365b6354df6e0220e2d0d6a2af965300bf6bee19a352c513,PodSandboxId:d90f1092da620e69bbc467dbc17017d26ed7e351f1f460d8cfa2f5fa744c0332,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742215341066312138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfpml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c443023b-cd1a-4c68-95ea-21e945f88e15,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d73a8dd6794a66be96fa55340cfe
f9c096d211461a6597900775ee7fb31061d,PodSandboxId:aecb92d28c7683fea0d3b0fe6004aa37718e726841435fa321421ef2c28eddf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742215330428864032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5efd14a8f4c69122921d67caed5e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a2bc102a5d64f2f40e006e80ed3dbb37f3996b3e6daaa
e677a0e5bbbdf293d0,PodSandboxId:a86fb64e69f35fb457529972cca93e35e9b7ef8abb2ce0cd235207aa6f60cd21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742215330422210457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eed52998b2ef38f7221edf88370515,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c159463a3ceb104234df8916a5f45b4790ff
840cb429031de6dac13d8d11ecf8,PodSandboxId:27f65aa74ce37bb1939e121d7278958a4870485e53a810e0f75a54f492f2813e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742215330395620755,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0e1e934e5a9e0fca80e2ef7acaa680,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60f82ce8690170f47fa0a774c43f343703f1c09f94e277299d56
ced83465e0f,PodSandboxId:3adc1ca1797b27a09a788f6cb790d5a1e36fcb67045a86885668b83cd8b84cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742215330374223398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 653939de3e18bd530b1f5fa403ec7f64,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c3afea3-03b7-4104-a90b-d9bc8a1d3d43 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.412710682Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7a1f9c4b-c23e-405f-b3d5-bcd661f52920 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.413005395Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:dabc9fe66eaacd72c8528f9ee57ef00e1b2004b66cce00aa5bcd41ded63ef506,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-8dhl4,Uid:f8a61344-cc0c-45b7-b157-e7662713cb83,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215617722476540,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-8dhl4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a61344-cc0c-45b7-b157-e7662713cb83,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:46:57.110188553Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c4de7d0c7eebbdc93b0251dfd2385920126017aca7d4e6cd7cdd7707fe087d23,Metadata:&PodSandboxMetadata{Name:nginx,Uid:8de7ed01-2923-4e6d-8d79-73b590e77823,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1742215477493872258,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8de7ed01-2923-4e6d-8d79-73b590e77823,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:44:37.180495347Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44e0a2a5eb268d5c86d610b41c6a58c2054d5b9c14050d5e4efd0e82487f9e66,Metadata:&PodSandboxMetadata{Name:busybox,Uid:86b0f221-352a-43ab-8627-f3bd097570e7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215424693591062,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86b0f221-352a-43ab-8627-f3bd097570e7,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:43:44.385756605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ec2562250167061bb61a8
9c84962bf285130b7d07feb2dec99b816f27bc60678,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-56d7c84fd4-xdmt9,Uid:015e05a6-3f15-4b89-be12-3508da6ca614,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215411294280128,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xdmt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 015e05a6-3f15-4b89-be12-3508da6ca614,pod-template-hash: 56d7c84fd4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:27.084920619Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db1849458e66b95bb50356b7fb92fc3e6f8095b644aca37f4fbb67bed3ded80e,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-66l7h,Uid:0dea4d95-0523-4d5b-84cb-9adc16a15c3b,Namespace:ingress-nginx,Attempt:0,},St
ate:SANDBOX_NOTREADY,CreatedAt:1742215347458948652,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 9aad88ca-42bc-461e-b0db-6a3f4c169332,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 9aad88ca-42bc-461e-b0db-6a3f4c169332,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-66l7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0dea4d95-0523-4d5b-84cb-9adc16a15c3b,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:27.149942171Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:80849abcc2545a296f9515d1f00e3c082666c079ce8027e9c10fe3d2f886236f,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-hq84q,Uid:76f8d5e3-24ae-4b5e-a45c-f01b16d165fd,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Crea
tedAt:1742215347407850820,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 48dd1ef2-5a6e-4081-92fb-51cd7e6a831a,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 48dd1ef2-5a6e-4081-92fb-51cd7e6a831a,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-hq84q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 76f8d5e3-24ae-4b5e-a45c-f01b16d165fd,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:27.094854704Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ca8740fcb29f6238a08bd5f550930bfe27bd53df6291f8147b14727dd088e19,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c7f99af7-bb01-4504-ac70-77dc8ab04b3e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215346230711057,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f99af7-bb01-4504-ac70-77dc8ab04b3e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2025-03-17T12:42:25.594747161Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aeda29e4d26e13a5232196926d3e5d4baca39b959e6d35e5e5b4042a2d2df7fa,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:7fc15951-f543-49f2-aa75-cfec8ee9f60a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215344010360936,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc15951-f543-49f2-aa75-cfec8ee9f60a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":
\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2025-03-17T12:42:23.400117380Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b3ad027614e622bdd42ddc39f4ad1fb21323dea293f705975466fee1a56f5b5,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-5pkbv,Uid:8713b029-97c0-4a95-a703-886b238a1cf1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215341710519445,Labels:map[string]string{controller-revision-hash: 578b4c597,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-5pkbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8713b02
9-97c0-4a95-a703-886b238a1cf1,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:21.397098870Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ea1483bc05de6bb1e3458948160d2dba4613e2f911875da97871ee48a8167bb6,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-z7dq4,Uid:0a5b10dc-42b3-4a25-9f03-222b3324baf9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215340653695598,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-z7dq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a5b10dc-42b3-4a25-9f03-222b3324baf9,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:20.344782884Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d90f1092da620e69bbc467dbc17017d26ed7e351f1f460d8cfa2f5fa744c0332,Metadata:&PodSandboxM
etadata{Name:kube-proxy-gfpml,Uid:c443023b-cd1a-4c68-95ea-21e945f88e15,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215340584555506,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gfpml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c443023b-cd1a-4c68-95ea-21e945f88e15,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:20.248837650Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:27f65aa74ce37bb1939e121d7278958a4870485e53a810e0f75a54f492f2813e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-012915,Uid:4d0e1e934e5a9e0fca80e2ef7acaa680,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215330255923907,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-012915,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4d0e1e934e5a9e0fca80e2ef7acaa680,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.84:8443,kubernetes.io/config.hash: 4d0e1e934e5a9e0fca80e2ef7acaa680,kubernetes.io/config.seen: 2025-03-17T12:42:09.578681648Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a86fb64e69f35fb457529972cca93e35e9b7ef8abb2ce0cd235207aa6f60cd21,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-012915,Uid:33eed52998b2ef38f7221edf88370515,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215330241210865,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eed52998b2ef38f7221edf88370515,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 33eed52998b2ef38f7221edf88370515,kubernetes.io/config
.seen: 2025-03-17T12:42:09.578682800Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3adc1ca1797b27a09a788f6cb790d5a1e36fcb67045a86885668b83cd8b84cf0,Metadata:&PodSandboxMetadata{Name:etcd-addons-012915,Uid:653939de3e18bd530b1f5fa403ec7f64,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215330240731421,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 653939de3e18bd530b1f5fa403ec7f64,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.84:2379,kubernetes.io/config.hash: 653939de3e18bd530b1f5fa403ec7f64,kubernetes.io/config.seen: 2025-03-17T12:42:09.578680165Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aecb92d28c7683fea0d3b0fe6004aa37718e726841435fa321421ef2c28eddf5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-012915,Uid:90b5efd14a8f4c6
9122921d67caed5e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215330239597446,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5efd14a8f4c69122921d67caed5e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 90b5efd14a8f4c69122921d67caed5e4,kubernetes.io/config.seen: 2025-03-17T12:42:09.578676051Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7a1f9c4b-c23e-405f-b3d5-bcd661f52920 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.414389239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=188b0c0f-72f6-4a09-98d1-00f3a1728652 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.414443780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=188b0c0f-72f6-4a09-98d1-00f3a1728652 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.414720131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd66a51459e1bef3f8408edd3f8f513b15938d306a8eca07b17aa8c7b9e28b71,PodSandboxId:c4de7d0c7eebbdc93b0251dfd2385920126017aca7d4e6cd7cdd7707fe087d23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1742215481306876676,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8de7ed01-2923-4e6d-8d79-73b590e77823,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1e96cb1f9da48b54130c0a5b77b5950455c825228b80873ba1bed389be3129,PodSandboxId:44e0a2a5eb268d5c86d610b41c6a58c2054d5b9c14050d5e4efd0e82487f9e66,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1742215428544863639,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86b0f221-352a-43ab-8627-f3bd097570e7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c523da30019c3442fb38769fc3ff8bd361afb7328c1b6ae987f0f2ed8fca2e18,PodSandboxId:ec2562250167061bb61a89c84962bf285130b7d07feb2dec99b816f27bc60678,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1742215418117594801,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xdmt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 015e05a6-3f15-4b89-be12-3508da6ca614,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:efdb31bc8b6c1ecfe33294cbdf0d81864ce54c14641c1fcbd231bb9a88adc0f6,PodSandboxId:db1849458e66b95bb50356b7fb92fc3e6f8095b644aca37f4fbb67bed3ded80e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400541922291,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-66l7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0dea4d95-0523-4d5b-84cb-9adc16a15c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133895556e47e4d79d84bfc1afab7f31237ad7cb5aa5d0d3137bae9a0ec19f48,PodSandboxId:80849abcc2545a296f9515d1f00e3c082666c079ce8027e9c10fe3d2f886236f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400444569254,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hq84q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 76f8d5e3-24ae-4b5e-a45c-f01b16d165fd,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a360856a0a6bf25f96af4f507d594db92c89567721ac9a63b6fed61b702d2c,PodSandboxId:7b3ad027614e622bdd42ddc39f4ad1fb21323dea293f705975466fee1a56f5b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1742215373208789588,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5pkbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8713b029-97c0-4a95-a703-886b238a1cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce580779eef53d01ece27b4bc2ffba2f4807f25438ba0cd881cef251d21b834,PodSandboxId:aeda29e4d26e13a5232196926d3e5d4baca39b959e6d35e5e5b4042a2d2df7fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1742215370723940907,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc15951-f543-49f2-aa75-cfec8ee9f60a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8be226d7c03469fc7293b21c8dff4b351bcc17daa0d40e8e38e9703a232b644b,PodSandboxId:7ca8740fcb29f6238a08bd5f550930bfe27bd53df6291f8147b14727dd088e19,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742215346578805704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f99af7-bb01-4504-ac70-77dc8ab04b3e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a181ba896ebb5402cbb2fde44e9c8995e6b16197f14fb89471cd7da7a8d0afa,PodSandboxId:ea1483bc05de6bb1e3458948160d2dba4613e2f911875da97871ee48a8167bb6,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742215343913365647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z7dq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a5b10dc-42b3-4a25-9f03-222b3324baf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:5d9804125479f6a1365b6354df6e0220e2d0d6a2af965300bf6bee19a352c513,PodSandboxId:d90f1092da620e69bbc467dbc17017d26ed7e351f1f460d8cfa2f5fa744c0332,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742215341066312138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfpml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c443023b-cd1a-4c68-95ea-21e945f88e15,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d73a8dd6794a66be96fa55340cfe
f9c096d211461a6597900775ee7fb31061d,PodSandboxId:aecb92d28c7683fea0d3b0fe6004aa37718e726841435fa321421ef2c28eddf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742215330428864032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5efd14a8f4c69122921d67caed5e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a2bc102a5d64f2f40e006e80ed3dbb37f3996b3e6daaa
e677a0e5bbbdf293d0,PodSandboxId:a86fb64e69f35fb457529972cca93e35e9b7ef8abb2ce0cd235207aa6f60cd21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742215330422210457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eed52998b2ef38f7221edf88370515,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c159463a3ceb104234df8916a5f45b4790ff
840cb429031de6dac13d8d11ecf8,PodSandboxId:27f65aa74ce37bb1939e121d7278958a4870485e53a810e0f75a54f492f2813e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742215330395620755,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0e1e934e5a9e0fca80e2ef7acaa680,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60f82ce8690170f47fa0a774c43f343703f1c09f94e277299d56
ced83465e0f,PodSandboxId:3adc1ca1797b27a09a788f6cb790d5a1e36fcb67045a86885668b83cd8b84cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742215330374223398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 653939de3e18bd530b1f5fa403ec7f64,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=188b0c0f-72f6-4a09-98d1-00f3a1728652 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.415828110Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: f8a61344-cc0c-45b7-b157-e7662713cb83,},},}" file="otel-collector/interceptors.go:62" id=25c28bfd-2be4-4a01-a8e5-4e5f7308e372 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.415909100Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:dabc9fe66eaacd72c8528f9ee57ef00e1b2004b66cce00aa5bcd41ded63ef506,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-8dhl4,Uid:f8a61344-cc0c-45b7-b157-e7662713cb83,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215617722476540,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-8dhl4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a61344-cc0c-45b7-b157-e7662713cb83,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:46:57.110188553Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=25c28bfd-2be4-4a01-a8e5-4e5f7308e372 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.416578030Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:dabc9fe66eaacd72c8528f9ee57ef00e1b2004b66cce00aa5bcd41ded63ef506,Verbose:false,}" file="otel-collector/interceptors.go:62" id=114e7269-3987-4d7b-9e20-2618acd736be name=/runtime.v1.RuntimeService/PodSandboxStatus
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.416668184Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:dabc9fe66eaacd72c8528f9ee57ef00e1b2004b66cce00aa5bcd41ded63ef506,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-8dhl4,Uid:f8a61344-cc0c-45b7-b157-e7662713cb83,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215617722476540,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-8dhl4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a61344-cc0c-45b7-b157-e7662713cb83,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:46:57.110188553Z,kubernetes.io/config.source: api
,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=114e7269-3987-4d7b-9e20-2618acd736be name=/runtime.v1.RuntimeService/PodSandboxStatus
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.416973924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: f8a61344-cc0c-45b7-b157-e7662713cb83,},},}" file="otel-collector/interceptors.go:62" id=bce42661-d49d-4a36-b7f0-157e51d4c38b name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.417014110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bce42661-d49d-4a36-b7f0-157e51d4c38b name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.417048335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bce42661-d49d-4a36-b7f0-157e51d4c38b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd66a51459e1b       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   c4de7d0c7eebb       nginx
	ff1e96cb1f9da       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   44e0a2a5eb268       busybox
	c523da30019c3       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   ec25622501670       ingress-nginx-controller-56d7c84fd4-xdmt9
	efdb31bc8b6c1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   db1849458e66b       ingress-nginx-admission-patch-66l7h
	133895556e47e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   80849abcc2545       ingress-nginx-admission-create-hq84q
	40a360856a0a6       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   7b3ad027614e6       amd-gpu-device-plugin-5pkbv
	cce580779eef5       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   aeda29e4d26e1       kube-ingress-dns-minikube
	8be226d7c0346       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   7ca8740fcb29f       storage-provisioner
	7a181ba896ebb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   ea1483bc05de6       coredns-668d6bf9bc-z7dq4
	5d9804125479f       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             4 minutes ago       Running             kube-proxy                0                   d90f1092da620       kube-proxy-gfpml
	1d73a8dd6794a       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             4 minutes ago       Running             kube-scheduler            0                   aecb92d28c768       kube-scheduler-addons-012915
	4a2bc102a5d64       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             4 minutes ago       Running             kube-controller-manager   0                   a86fb64e69f35       kube-controller-manager-addons-012915
	c159463a3ceb1       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             4 minutes ago       Running             kube-apiserver            0                   27f65aa74ce37       kube-apiserver-addons-012915
	e60f82ce86901       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   3adc1ca1797b2       etcd-addons-012915
	
	
	==> coredns [7a181ba896ebb5402cbb2fde44e9c8995e6b16197f14fb89471cd7da7a8d0afa] <==
	[INFO] 10.244.0.8:49913 - 3827 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000102878s
	[INFO] 10.244.0.8:49913 - 6117 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000098609s
	[INFO] 10.244.0.8:49913 - 28095 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000073695s
	[INFO] 10.244.0.8:49913 - 55591 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000097s
	[INFO] 10.244.0.8:49913 - 50510 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000065959s
	[INFO] 10.244.0.8:49913 - 21083 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000088645s
	[INFO] 10.244.0.8:49913 - 12222 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000077001s
	[INFO] 10.244.0.8:48491 - 32076 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131036s
	[INFO] 10.244.0.8:48491 - 31797 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00012079s
	[INFO] 10.244.0.8:49200 - 45771 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000121235s
	[INFO] 10.244.0.8:49200 - 45479 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007323s
	[INFO] 10.244.0.8:54512 - 37816 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108277s
	[INFO] 10.244.0.8:54512 - 37552 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130997s
	[INFO] 10.244.0.8:38665 - 16125 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000101873s
	[INFO] 10.244.0.8:38665 - 15671 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000075688s
	[INFO] 10.244.0.23:50713 - 16053 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000304035s
	[INFO] 10.244.0.23:52398 - 18250 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000114522s
	[INFO] 10.244.0.23:49369 - 29459 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091164s
	[INFO] 10.244.0.23:46512 - 1811 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135319s
	[INFO] 10.244.0.23:59638 - 27790 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066899s
	[INFO] 10.244.0.23:59548 - 31923 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082871s
	[INFO] 10.244.0.23:36786 - 25199 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.002887193s
	[INFO] 10.244.0.23:40623 - 14426 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004900596s
	[INFO] 10.244.0.27:42367 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000241795s
	[INFO] 10.244.0.27:34940 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110771s
	
	
	==> describe nodes <==
	Name:               addons-012915
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-012915
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c
	                    minikube.k8s.io/name=addons-012915
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T12_42_16_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-012915
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 12:42:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-012915
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 12:46:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 12:44:49 +0000   Mon, 17 Mar 2025 12:42:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 12:44:49 +0000   Mon, 17 Mar 2025 12:42:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 12:44:49 +0000   Mon, 17 Mar 2025 12:42:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 12:44:49 +0000   Mon, 17 Mar 2025 12:42:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    addons-012915
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa9b967ac9504025a9ee76c4341b2f70
	  System UUID:                fa9b967a-c950-4025-a9ee-76c4341b2f70
	  Boot ID:                    06cacc75-df46-43ee-b88f-6a36992d8647
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     hello-world-app-7d9564db4-8dhl4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-xdmt9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m31s
	  kube-system                 amd-gpu-device-plugin-5pkbv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 coredns-668d6bf9bc-z7dq4                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m38s
	  kube-system                 etcd-addons-012915                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m43s
	  kube-system                 kube-apiserver-addons-012915                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-addons-012915        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-gfpml                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-scheduler-addons-012915                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m36s  kube-proxy       
	  Normal  Starting                 4m43s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m43s  kubelet          Node addons-012915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s  kubelet          Node addons-012915 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s  kubelet          Node addons-012915 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m42s  kubelet          Node addons-012915 status is now: NodeReady
	  Normal  RegisteredNode           4m39s  node-controller  Node addons-012915 event: Registered Node addons-012915 in Controller
	
	
	==> dmesg <==
	[  +4.074818] systemd-fstab-generator[860]: Ignoring "noauto" option for root device
	[  +0.063473] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.969885] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.075320] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.749538] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.690834] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.028787] kauditd_printk_skb: 128 callbacks suppressed
	[  +5.089939] kauditd_printk_skb: 159 callbacks suppressed
	[ +12.866037] kauditd_printk_skb: 31 callbacks suppressed
	[Mar17 12:43] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.771731] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.655980] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.486780] kauditd_printk_skb: 36 callbacks suppressed
	[  +8.872962] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.030974] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.270633] kauditd_printk_skb: 9 callbacks suppressed
	[Mar17 12:44] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.028036] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.114748] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.840044] kauditd_printk_skb: 63 callbacks suppressed
	[  +7.986434] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.230236] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.533480] kauditd_printk_skb: 29 callbacks suppressed
	[ +15.686562] kauditd_printk_skb: 9 callbacks suppressed
	[Mar17 12:46] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [e60f82ce8690170f47fa0a774c43f343703f1c09f94e277299d56ced83465e0f] <==
	{"level":"warn","ts":"2025-03-17T12:43:23.671708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.22929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:43:23.671751Z","caller":"traceutil/trace.go:171","msg":"trace[977862219] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1004; }","duration":"120.289304ms","start":"2025-03-17T12:43:23.551450Z","end":"2025-03-17T12:43:23.671739Z","steps":["trace[977862219] 'agreement among raft nodes before linearized reading'  (duration: 120.231025ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:43:23.671877Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.327794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-03-17T12:43:23.671914Z","caller":"traceutil/trace.go:171","msg":"trace[1519861164] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1004; }","duration":"166.378813ms","start":"2025-03-17T12:43:23.505524Z","end":"2025-03-17T12:43:23.671902Z","steps":["trace[1519861164] 'agreement among raft nodes before linearized reading'  (duration: 166.303303ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:43:37.337560Z","caller":"traceutil/trace.go:171","msg":"trace[103932370] linearizableReadLoop","detail":"{readStateIndex:1113; appliedIndex:1112; }","duration":"172.5411ms","start":"2025-03-17T12:43:37.165003Z","end":"2025-03-17T12:43:37.337544Z","steps":["trace[103932370] 'read index received'  (duration: 171.540007ms)","trace[103932370] 'applied index is now lower than readState.Index'  (duration: 1.000664ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T12:43:37.337738Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.711672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:43:37.337807Z","caller":"traceutil/trace.go:171","msg":"trace[2048962122] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1085; }","duration":"172.811552ms","start":"2025-03-17T12:43:37.164986Z","end":"2025-03-17T12:43:37.337798Z","steps":["trace[2048962122] 'agreement among raft nodes before linearized reading'  (duration: 172.671788ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:43:37.338582Z","caller":"traceutil/trace.go:171","msg":"trace[749594463] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"280.791133ms","start":"2025-03-17T12:43:37.057737Z","end":"2025-03-17T12:43:37.338528Z","steps":["trace[749594463] 'process raft request'  (duration: 278.850485ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:44:04.231386Z","caller":"traceutil/trace.go:171","msg":"trace[728916456] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1213; }","duration":"304.101976ms","start":"2025-03-17T12:44:03.927193Z","end":"2025-03-17T12:44:04.231295Z","steps":["trace[728916456] 'process raft request'  (duration: 303.979351ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:44:04.231561Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T12:44:03.927183Z","time spent":"304.26024ms","remote":"127.0.0.1:59508","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":70,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-cd9db85c-96xlv.182d97a1707d6318\" mod_revision:815 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-cd9db85c-96xlv.182d97a1707d6318\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-cd9db85c-96xlv.182d97a1707d6318\" > >"}
	{"level":"info","ts":"2025-03-17T12:44:04.236906Z","caller":"traceutil/trace.go:171","msg":"trace[1732004958] linearizableReadLoop","detail":"{readStateIndex:1248; appliedIndex:1247; }","duration":"171.164353ms","start":"2025-03-17T12:44:04.065731Z","end":"2025-03-17T12:44:04.236896Z","steps":["trace[1732004958] 'read index received'  (duration: 165.939649ms)","trace[1732004958] 'applied index is now lower than readState.Index'  (duration: 5.224337ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T12:44:04.237010Z","caller":"traceutil/trace.go:171","msg":"trace[1184029412] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"308.780397ms","start":"2025-03-17T12:44:03.928224Z","end":"2025-03-17T12:44:04.237004Z","steps":["trace[1184029412] 'process raft request'  (duration: 308.595788ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:44:04.237079Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T12:44:03.928214Z","time spent":"308.825144ms","remote":"127.0.0.1:59130","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1207 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-03-17T12:44:04.237249Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.470371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-03-17T12:44:04.237299Z","caller":"traceutil/trace.go:171","msg":"trace[1900314030] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1214; }","duration":"171.583605ms","start":"2025-03-17T12:44:04.065707Z","end":"2025-03-17T12:44:04.237291Z","steps":["trace[1900314030] 'agreement among raft nodes before linearized reading'  (duration: 171.478792ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:44:29.060435Z","caller":"traceutil/trace.go:171","msg":"trace[1879821724] linearizableReadLoop","detail":"{readStateIndex:1521; appliedIndex:1520; }","duration":"166.096471ms","start":"2025-03-17T12:44:28.894323Z","end":"2025-03-17T12:44:29.060419Z","steps":["trace[1879821724] 'read index received'  (duration: 165.93886ms)","trace[1879821724] 'applied index is now lower than readState.Index'  (duration: 157.163µs)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T12:44:29.060522Z","caller":"traceutil/trace.go:171","msg":"trace[756830991] transaction","detail":"{read_only:false; response_revision:1480; number_of_response:1; }","duration":"327.951814ms","start":"2025-03-17T12:44:28.732563Z","end":"2025-03-17T12:44:29.060515Z","steps":["trace[756830991] 'process raft request'  (duration: 327.728886ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:44:29.060598Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T12:44:28.732546Z","time spent":"327.993058ms","remote":"127.0.0.1:59242","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1454 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2025-03-17T12:44:29.060630Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.836541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:44:29.060674Z","caller":"traceutil/trace.go:171","msg":"trace[1856043071] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1480; }","duration":"154.89849ms","start":"2025-03-17T12:44:28.905762Z","end":"2025-03-17T12:44:29.060661Z","steps":["trace[1856043071] 'agreement among raft nodes before linearized reading'  (duration: 154.837496ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:44:29.060821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.803134ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-03-17T12:44:29.060827Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.501123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-03-17T12:44:29.060841Z","caller":"traceutil/trace.go:171","msg":"trace[1756805791] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1480; }","duration":"142.824169ms","start":"2025-03-17T12:44:28.918010Z","end":"2025-03-17T12:44:29.060834Z","steps":["trace[1756805791] 'agreement among raft nodes before linearized reading'  (duration: 142.794282ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:44:29.060844Z","caller":"traceutil/trace.go:171","msg":"trace[1817373682] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1480; }","duration":"166.538356ms","start":"2025-03-17T12:44:28.894300Z","end":"2025-03-17T12:44:29.060839Z","steps":["trace[1817373682] 'agreement among raft nodes before linearized reading'  (duration: 166.471494ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:44:32.380178Z","caller":"traceutil/trace.go:171","msg":"trace[763296153] transaction","detail":"{read_only:false; response_revision:1511; number_of_response:1; }","duration":"171.594633ms","start":"2025-03-17T12:44:32.208567Z","end":"2025-03-17T12:44:32.380161Z","steps":["trace[763296153] 'process raft request'  (duration: 171.478484ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:46:58 up 5 min,  0 users,  load average: 0.50, 1.19, 0.64
	Linux addons-012915 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c159463a3ceb104234df8916a5f45b4790ff840cb429031de6dac13d8d11ecf8] <==
	I0317 12:43:03.973063       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0317 12:43:55.552768       1 conn.go:339] Error on socket receive: read tcp 192.168.39.84:8443->192.168.39.1:58516: use of closed network connection
	E0317 12:43:55.722174       1 conn.go:339] Error on socket receive: read tcp 192.168.39.84:8443->192.168.39.1:58540: use of closed network connection
	I0317 12:44:17.827179       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.19.214"}
	I0317 12:44:29.826897       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0317 12:44:30.944384       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0317 12:44:33.236074       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0317 12:44:37.031022       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0317 12:44:37.221083       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.249.155"}
	I0317 12:44:40.501198       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0317 12:44:59.703259       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:59.703379       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:59.727662       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:59.728025       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:59.780571       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:59.780706       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:59.838569       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:59.838619       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:59.846044       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:59.846084       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0317 12:45:00.839309       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0317 12:45:00.846498       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0317 12:45:00.941103       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0317 12:45:04.943751       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0317 12:46:57.308248       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.196.201"}
	
	
	==> kube-controller-manager [4a2bc102a5d64f2f40e006e80ed3dbb37f3996b3e6daaae677a0e5bbbdf293d0] <==
	E0317 12:45:55.285844       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:45:59.641058       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:45:59.641984       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0317 12:45:59.642857       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:45:59.642894       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:46:15.609547       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:46:15.610531       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0317 12:46:15.611642       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:46:15.611722       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:46:23.043373       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:46:23.044392       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0317 12:46:23.045180       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:46:23.045208       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:46:32.361507       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:46:32.362446       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0317 12:46:32.363155       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:46:32.363210       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:46:55.507190       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:46:55.507972       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0317 12:46:55.509475       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:46:55.509516       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0317 12:46:57.118701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="29.665451ms"
	I0317 12:46:57.130868       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="12.083665ms"
	I0317 12:46:57.131225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="41.035µs"
	I0317 12:46:57.140393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="68.481µs"
	
	
	==> kube-proxy [5d9804125479f6a1365b6354df6e0220e2d0d6a2af965300bf6bee19a352c513] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 12:42:21.964854       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0317 12:42:21.998666       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.84"]
	E0317 12:42:21.998745       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 12:42:22.087441       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0317 12:42:22.087511       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0317 12:42:22.087534       1 server_linux.go:170] "Using iptables Proxier"
	I0317 12:42:22.090085       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 12:42:22.090377       1 server.go:497] "Version info" version="v1.32.2"
	I0317 12:42:22.090400       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 12:42:22.094843       1 config.go:199] "Starting service config controller"
	I0317 12:42:22.094878       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 12:42:22.094906       1 config.go:105] "Starting endpoint slice config controller"
	I0317 12:42:22.094910       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 12:42:22.098376       1 config.go:329] "Starting node config controller"
	I0317 12:42:22.098398       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 12:42:22.196848       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 12:42:22.196899       1 shared_informer.go:320] Caches are synced for service config
	I0317 12:42:22.198464       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1d73a8dd6794a66be96fa55340cfef9c096d211461a6597900775ee7fb31061d] <==
	W0317 12:42:12.724943       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0317 12:42:12.724967       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:42:12.725000       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 12:42:12.725040       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:42:12.725056       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 12:42:12.725076       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:42:12.725255       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 12:42:12.725313       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0317 12:42:13.531058       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0317 12:42:13.531153       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:42:13.543635       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 12:42:13.543716       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:42:13.582301       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 12:42:13.582525       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:42:13.604241       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 12:42:13.604404       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:42:13.664692       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0317 12:42:13.664813       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 12:42:13.794966       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0317 12:42:13.795131       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0317 12:42:13.826935       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0317 12:42:13.826975       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:42:13.871365       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 12:42:13.871474       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0317 12:42:14.317805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 12:46:15 addons-012915 kubelet[1227]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 12:46:15 addons-012915 kubelet[1227]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 12:46:15 addons-012915 kubelet[1227]: E0317 12:46:15.913970    1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215575913649049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 12:46:15 addons-012915 kubelet[1227]: E0317 12:46:15.914136    1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215575913649049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 12:46:25 addons-012915 kubelet[1227]: E0317 12:46:25.916775    1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215585916440678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 12:46:25 addons-012915 kubelet[1227]: E0317 12:46:25.917383    1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215585916440678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 12:46:35 addons-012915 kubelet[1227]: E0317 12:46:35.919643    1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215595919213400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 12:46:35 addons-012915 kubelet[1227]: E0317 12:46:35.919682    1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215595919213400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 12:46:41 addons-012915 kubelet[1227]: I0317 12:46:41.387955    1227 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-5pkbv" secret="" err="secret \"gcp-auth\" not found"
	Mar 17 12:46:45 addons-012915 kubelet[1227]: E0317 12:46:45.921646    1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215605921301024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 12:46:45 addons-012915 kubelet[1227]: E0317 12:46:45.921919    1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215605921301024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 12:46:55 addons-012915 kubelet[1227]: E0317 12:46:55.924634    1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215615924311190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 12:46:55 addons-012915 kubelet[1227]: E0317 12:46:55.924970    1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215615924311190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110483    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="5eeb38fe-0c9f-4504-9741-270bd6332865" containerName="task-pv-container"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110569    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="10d4e550-08b3-4311-afc0-fbaa5490aa26" containerName="volume-snapshot-controller"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110579    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="592324f8-e091-4a7a-a486-93d54a56c0f1" containerName="csi-attacher"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110585    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="node-driver-registrar"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110592    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="hostpath"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110597    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="liveness-probe"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110602    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="csi-external-health-monitor-controller"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110608    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="csi-provisioner"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110614    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="610b15f2-2636-4b8f-9363-d7c6eca55342" containerName="csi-resizer"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110620    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="csi-snapshotter"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110626    1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="a32b085e-c793-429b-9098-8df28c689c6d" containerName="volume-snapshot-controller"
	Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.305410    1227 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcvbh\" (UniqueName: \"kubernetes.io/projected/f8a61344-cc0c-45b7-b157-e7662713cb83-kube-api-access-kcvbh\") pod \"hello-world-app-7d9564db4-8dhl4\" (UID: \"f8a61344-cc0c-45b7-b157-e7662713cb83\") " pod="default/hello-world-app-7d9564db4-8dhl4"
	
	
	==> storage-provisioner [8be226d7c03469fc7293b21c8dff4b351bcc17daa0d40e8e38e9703a232b644b] <==
	I0317 12:42:26.773425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0317 12:42:26.846465       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0317 12:42:26.846531       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0317 12:42:26.860041       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0317 12:42:26.860559       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-012915_b267882d-687d-412c-ae15-c6731e4f36e7!
	I0317 12:42:26.861310       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"727e4127-f891-4f4c-960c-bbe037bbdce9", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-012915_b267882d-687d-412c-ae15-c6731e4f36e7 became leader
	I0317 12:42:26.977150       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-012915_b267882d-687d-412c-ae15-c6731e4f36e7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-012915 -n addons-012915
helpers_test.go:261: (dbg) Run:  kubectl --context addons-012915 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-8dhl4 ingress-nginx-admission-create-hq84q ingress-nginx-admission-patch-66l7h
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-012915 describe pod hello-world-app-7d9564db4-8dhl4 ingress-nginx-admission-create-hq84q ingress-nginx-admission-patch-66l7h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-012915 describe pod hello-world-app-7d9564db4-8dhl4 ingress-nginx-admission-create-hq84q ingress-nginx-admission-patch-66l7h: exit status 1 (63.996372ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-8dhl4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-012915/192.168.39.84
	Start Time:       Mon, 17 Mar 2025 12:46:57 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kcvbh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kcvbh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-8dhl4 to addons-012915
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hq84q" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-66l7h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-012915 describe pod hello-world-app-7d9564db4-8dhl4 ingress-nginx-admission-create-hq84q ingress-nginx-admission-patch-66l7h: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 addons disable ingress-dns --alsologtostderr -v=1: (1.041171122s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 addons disable ingress --alsologtostderr -v=1: (7.68948455s)
--- FAIL: TestAddons/parallel/Ingress (151.44s)

                                                
                                    
TestPreload (175.21s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-939223 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-939223 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m40.316479615s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-939223 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-939223 image pull gcr.io/k8s-minikube/busybox: (3.822765425s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-939223
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-939223: (6.601791583s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-939223 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0317 13:37:12.573818  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-939223 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.445127866s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-939223 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:631: *** TestPreload FAILED at 2025-03-17 13:37:35.776204362 +0000 UTC m=+3375.262001081
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-939223 -n test-preload-939223
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-939223 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-939223 logs -n 25: (1.027158235s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-913463 ssh -n                                                                 | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:21 UTC | 17 Mar 25 13:21 UTC |
	|         | multinode-913463-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-913463 ssh -n multinode-913463 sudo cat                                       | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:21 UTC | 17 Mar 25 13:21 UTC |
	|         | /home/docker/cp-test_multinode-913463-m03_multinode-913463.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-913463 cp multinode-913463-m03:/home/docker/cp-test.txt                       | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:21 UTC | 17 Mar 25 13:21 UTC |
	|         | multinode-913463-m02:/home/docker/cp-test_multinode-913463-m03_multinode-913463-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-913463 ssh -n                                                                 | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:21 UTC | 17 Mar 25 13:21 UTC |
	|         | multinode-913463-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-913463 ssh -n multinode-913463-m02 sudo cat                                   | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:21 UTC | 17 Mar 25 13:21 UTC |
	|         | /home/docker/cp-test_multinode-913463-m03_multinode-913463-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-913463 node stop m03                                                          | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:21 UTC | 17 Mar 25 13:21 UTC |
	| node    | multinode-913463 node start                                                             | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:21 UTC | 17 Mar 25 13:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-913463                                                                | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:23 UTC |                     |
	| stop    | -p multinode-913463                                                                     | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:23 UTC | 17 Mar 25 13:26 UTC |
	| start   | -p multinode-913463                                                                     | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:26 UTC | 17 Mar 25 13:29 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-913463                                                                | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:29 UTC |                     |
	| node    | multinode-913463 node delete                                                            | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:29 UTC | 17 Mar 25 13:29 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-913463 stop                                                                   | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:29 UTC | 17 Mar 25 13:32 UTC |
	| start   | -p multinode-913463                                                                     | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:32 UTC | 17 Mar 25 13:34 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-913463                                                                | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:34 UTC |                     |
	| start   | -p multinode-913463-m02                                                                 | multinode-913463-m02 | jenkins | v1.35.0 | 17 Mar 25 13:34 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-913463-m03                                                                 | multinode-913463-m03 | jenkins | v1.35.0 | 17 Mar 25 13:34 UTC | 17 Mar 25 13:34 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-913463                                                                 | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:34 UTC |                     |
	| delete  | -p multinode-913463-m03                                                                 | multinode-913463-m03 | jenkins | v1.35.0 | 17 Mar 25 13:34 UTC | 17 Mar 25 13:34 UTC |
	| delete  | -p multinode-913463                                                                     | multinode-913463     | jenkins | v1.35.0 | 17 Mar 25 13:34 UTC | 17 Mar 25 13:34 UTC |
	| start   | -p test-preload-939223                                                                  | test-preload-939223  | jenkins | v1.35.0 | 17 Mar 25 13:34 UTC | 17 Mar 25 13:36 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-939223 image pull                                                          | test-preload-939223  | jenkins | v1.35.0 | 17 Mar 25 13:36 UTC | 17 Mar 25 13:36 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-939223                                                                  | test-preload-939223  | jenkins | v1.35.0 | 17 Mar 25 13:36 UTC | 17 Mar 25 13:36 UTC |
	| start   | -p test-preload-939223                                                                  | test-preload-939223  | jenkins | v1.35.0 | 17 Mar 25 13:36 UTC | 17 Mar 25 13:37 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-939223 image list                                                          | test-preload-939223  | jenkins | v1.35.0 | 17 Mar 25 13:37 UTC | 17 Mar 25 13:37 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:36:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:36:34.167365  660439 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:36:34.167633  660439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:36:34.167644  660439 out.go:358] Setting ErrFile to fd 2...
	I0317 13:36:34.167648  660439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:36:34.167826  660439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:36:34.168328  660439 out.go:352] Setting JSON to false
	I0317 13:36:34.169403  660439 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11938,"bootTime":1742206656,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:36:34.169508  660439 start.go:139] virtualization: kvm guest
	I0317 13:36:34.171780  660439 out.go:177] * [test-preload-939223] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:36:34.173283  660439 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:36:34.173317  660439 notify.go:220] Checking for updates...
	I0317 13:36:34.175698  660439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:36:34.176707  660439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:36:34.177722  660439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:36:34.178918  660439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:36:34.180206  660439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:36:34.181756  660439 config.go:182] Loaded profile config "test-preload-939223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0317 13:36:34.182130  660439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:36:34.182176  660439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:36:34.197339  660439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41823
	I0317 13:36:34.197791  660439 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:36:34.198238  660439 main.go:141] libmachine: Using API Version  1
	I0317 13:36:34.198260  660439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:36:34.198671  660439 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:36:34.198843  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	I0317 13:36:34.200494  660439 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0317 13:36:34.201713  660439 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:36:34.202006  660439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:36:34.202042  660439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:36:34.216557  660439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I0317 13:36:34.217046  660439 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:36:34.217529  660439 main.go:141] libmachine: Using API Version  1
	I0317 13:36:34.217564  660439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:36:34.217948  660439 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:36:34.218132  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	I0317 13:36:34.252521  660439 out.go:177] * Using the kvm2 driver based on existing profile
	I0317 13:36:34.253761  660439 start.go:297] selected driver: kvm2
	I0317 13:36:34.253782  660439 start.go:901] validating driver "kvm2" against &{Name:test-preload-939223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-939223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:36:34.253907  660439 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:36:34.254976  660439 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:36:34.255084  660439 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 13:36:34.270295  660439 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 13:36:34.270640  660439 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:36:34.270672  660439 cni.go:84] Creating CNI manager for ""
	I0317 13:36:34.270710  660439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:36:34.270753  660439 start.go:340] cluster config:
	{Name:test-preload-939223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-939223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:36:34.270839  660439 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:36:34.272409  660439 out.go:177] * Starting "test-preload-939223" primary control-plane node in "test-preload-939223" cluster
	I0317 13:36:34.273509  660439 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0317 13:36:34.304361  660439 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0317 13:36:34.304389  660439 cache.go:56] Caching tarball of preloaded images
	I0317 13:36:34.304522  660439 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0317 13:36:34.306178  660439 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0317 13:36:34.307299  660439 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0317 13:36:34.333795  660439 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0317 13:36:37.549262  660439 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0317 13:36:37.549372  660439 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0317 13:36:38.407765  660439 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0317 13:36:38.407896  660439 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/config.json ...
	I0317 13:36:38.408129  660439 start.go:360] acquireMachinesLock for test-preload-939223: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:36:38.408201  660439 start.go:364] duration metric: took 47.841µs to acquireMachinesLock for "test-preload-939223"
	I0317 13:36:38.408217  660439 start.go:96] Skipping create...Using existing machine configuration
	I0317 13:36:38.408223  660439 fix.go:54] fixHost starting: 
	I0317 13:36:38.408484  660439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:36:38.408519  660439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:36:38.423442  660439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0317 13:36:38.423995  660439 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:36:38.424469  660439 main.go:141] libmachine: Using API Version  1
	I0317 13:36:38.424493  660439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:36:38.424821  660439 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:36:38.424988  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	I0317 13:36:38.425127  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetState
	I0317 13:36:38.426625  660439 fix.go:112] recreateIfNeeded on test-preload-939223: state=Stopped err=<nil>
	I0317 13:36:38.426655  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	W0317 13:36:38.426792  660439 fix.go:138] unexpected machine state, will restart: <nil>
	I0317 13:36:38.428598  660439 out.go:177] * Restarting existing kvm2 VM for "test-preload-939223" ...
	I0317 13:36:38.429637  660439 main.go:141] libmachine: (test-preload-939223) Calling .Start
	I0317 13:36:38.429813  660439 main.go:141] libmachine: (test-preload-939223) starting domain...
	I0317 13:36:38.429835  660439 main.go:141] libmachine: (test-preload-939223) ensuring networks are active...
	I0317 13:36:38.430593  660439 main.go:141] libmachine: (test-preload-939223) Ensuring network default is active
	I0317 13:36:38.430920  660439 main.go:141] libmachine: (test-preload-939223) Ensuring network mk-test-preload-939223 is active
	I0317 13:36:38.431301  660439 main.go:141] libmachine: (test-preload-939223) getting domain XML...
	I0317 13:36:38.432075  660439 main.go:141] libmachine: (test-preload-939223) creating domain...
	I0317 13:36:39.611408  660439 main.go:141] libmachine: (test-preload-939223) waiting for IP...
	I0317 13:36:39.612315  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:39.612740  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:39.612849  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:39.612749  660491 retry.go:31] will retry after 284.700057ms: waiting for domain to come up
	I0317 13:36:39.899369  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:39.899855  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:39.899885  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:39.899799  660491 retry.go:31] will retry after 267.82406ms: waiting for domain to come up
	I0317 13:36:40.169386  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:40.169837  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:40.169867  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:40.169808  660491 retry.go:31] will retry after 476.297056ms: waiting for domain to come up
	I0317 13:36:40.647377  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:40.647789  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:40.647861  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:40.647776  660491 retry.go:31] will retry after 415.911418ms: waiting for domain to come up
	I0317 13:36:41.065401  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:41.065791  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:41.065821  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:41.065751  660491 retry.go:31] will retry after 683.400854ms: waiting for domain to come up
	I0317 13:36:41.750465  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:41.750911  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:41.750940  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:41.750891  660491 retry.go:31] will retry after 892.289668ms: waiting for domain to come up
	I0317 13:36:42.644933  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:42.645310  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:42.645333  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:42.645272  660491 retry.go:31] will retry after 1.15869793s: waiting for domain to come up
	I0317 13:36:43.806062  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:43.806433  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:43.806493  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:43.806421  660491 retry.go:31] will retry after 1.096820841s: waiting for domain to come up
	I0317 13:36:44.904766  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:44.905153  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:44.905179  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:44.905125  660491 retry.go:31] will retry after 1.184245897s: waiting for domain to come up
	I0317 13:36:46.091459  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:46.091980  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:46.092012  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:46.091942  660491 retry.go:31] will retry after 2.153520917s: waiting for domain to come up
	I0317 13:36:48.248481  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:48.248853  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:48.248877  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:48.248821  660491 retry.go:31] will retry after 1.879300605s: waiting for domain to come up
	I0317 13:36:50.130160  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:50.130609  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:50.130638  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:50.130561  660491 retry.go:31] will retry after 3.417023443s: waiting for domain to come up
	I0317 13:36:53.549605  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:53.550027  660439 main.go:141] libmachine: (test-preload-939223) DBG | unable to find current IP address of domain test-preload-939223 in network mk-test-preload-939223
	I0317 13:36:53.550063  660439 main.go:141] libmachine: (test-preload-939223) DBG | I0317 13:36:53.549979  660491 retry.go:31] will retry after 3.923568683s: waiting for domain to come up
	I0317 13:36:57.478103  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.478500  660439 main.go:141] libmachine: (test-preload-939223) found domain IP: 192.168.39.2
	I0317 13:36:57.478528  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has current primary IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.478534  660439 main.go:141] libmachine: (test-preload-939223) reserving static IP address...
	I0317 13:36:57.478995  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "test-preload-939223", mac: "52:54:00:2b:64:db", ip: "192.168.39.2"} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:57.479028  660439 main.go:141] libmachine: (test-preload-939223) DBG | skip adding static IP to network mk-test-preload-939223 - found existing host DHCP lease matching {name: "test-preload-939223", mac: "52:54:00:2b:64:db", ip: "192.168.39.2"}
	I0317 13:36:57.479042  660439 main.go:141] libmachine: (test-preload-939223) reserved static IP address 192.168.39.2 for domain test-preload-939223
	I0317 13:36:57.479055  660439 main.go:141] libmachine: (test-preload-939223) waiting for SSH...
	I0317 13:36:57.479068  660439 main.go:141] libmachine: (test-preload-939223) DBG | Getting to WaitForSSH function...
	I0317 13:36:57.481301  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.481626  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:57.481655  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.481772  660439 main.go:141] libmachine: (test-preload-939223) DBG | Using SSH client type: external
	I0317 13:36:57.481793  660439 main.go:141] libmachine: (test-preload-939223) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/test-preload-939223/id_rsa (-rw-------)
	I0317 13:36:57.481837  660439 main.go:141] libmachine: (test-preload-939223) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/test-preload-939223/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:36:57.481862  660439 main.go:141] libmachine: (test-preload-939223) DBG | About to run SSH command:
	I0317 13:36:57.481879  660439 main.go:141] libmachine: (test-preload-939223) DBG | exit 0
	I0317 13:36:57.603041  660439 main.go:141] libmachine: (test-preload-939223) DBG | SSH cmd err, output: <nil>: 
	I0317 13:36:57.603400  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetConfigRaw
	I0317 13:36:57.604019  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetIP
	I0317 13:36:57.606533  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.606900  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:57.606928  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.607164  660439 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/config.json ...
	I0317 13:36:57.607393  660439 machine.go:93] provisionDockerMachine start ...
	I0317 13:36:57.607416  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	I0317 13:36:57.607671  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:36:57.609782  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.610128  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:57.610150  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.610307  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHPort
	I0317 13:36:57.610502  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:57.610678  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:57.610831  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHUsername
	I0317 13:36:57.611010  660439 main.go:141] libmachine: Using SSH client type: native
	I0317 13:36:57.611283  660439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0317 13:36:57.611293  660439 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 13:36:57.711327  660439 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0317 13:36:57.711359  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetMachineName
	I0317 13:36:57.711618  660439 buildroot.go:166] provisioning hostname "test-preload-939223"
	I0317 13:36:57.711649  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetMachineName
	I0317 13:36:57.711834  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:36:57.714660  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.715039  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:57.715064  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.715188  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHPort
	I0317 13:36:57.715377  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:57.715544  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:57.715716  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHUsername
	I0317 13:36:57.715879  660439 main.go:141] libmachine: Using SSH client type: native
	I0317 13:36:57.716076  660439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0317 13:36:57.716088  660439 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-939223 && echo "test-preload-939223" | sudo tee /etc/hostname
	I0317 13:36:57.828010  660439 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-939223
	
	I0317 13:36:57.828043  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:36:57.831078  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.831445  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:57.831478  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.831679  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHPort
	I0317 13:36:57.831866  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:57.832008  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:57.832144  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHUsername
	I0317 13:36:57.832313  660439 main.go:141] libmachine: Using SSH client type: native
	I0317 13:36:57.832562  660439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0317 13:36:57.832579  660439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-939223' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-939223/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-939223' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:36:57.939012  660439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:36:57.939043  660439 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:36:57.939093  660439 buildroot.go:174] setting up certificates
	I0317 13:36:57.939106  660439 provision.go:84] configureAuth start
	I0317 13:36:57.939118  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetMachineName
	I0317 13:36:57.939439  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetIP
	I0317 13:36:57.941784  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.942049  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:57.942085  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.942284  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:36:57.944257  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.944577  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:57.944600  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:57.944744  660439 provision.go:143] copyHostCerts
	I0317 13:36:57.944808  660439 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:36:57.944834  660439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:36:57.944911  660439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:36:57.945064  660439 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:36:57.945078  660439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:36:57.945121  660439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:36:57.945198  660439 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:36:57.945207  660439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:36:57.945254  660439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:36:57.945325  660439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.test-preload-939223 san=[127.0.0.1 192.168.39.2 localhost minikube test-preload-939223]
	I0317 13:36:58.450529  660439 provision.go:177] copyRemoteCerts
	I0317 13:36:58.450589  660439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:36:58.450616  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:36:58.453010  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:58.453295  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:58.453324  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:58.453466  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHPort
	I0317 13:36:58.453693  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:58.453836  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHUsername
	I0317 13:36:58.453980  660439 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/test-preload-939223/id_rsa Username:docker}
	I0317 13:36:58.532767  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:36:58.554401  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0317 13:36:58.575759  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 13:36:58.596775  660439 provision.go:87] duration metric: took 657.655218ms to configureAuth
	I0317 13:36:58.596801  660439 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:36:58.596987  660439 config.go:182] Loaded profile config "test-preload-939223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0317 13:36:58.597071  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:36:58.599557  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:58.599948  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:58.599982  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:58.600180  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHPort
	I0317 13:36:58.600386  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:58.600561  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:58.600717  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHUsername
	I0317 13:36:58.600880  660439 main.go:141] libmachine: Using SSH client type: native
	I0317 13:36:58.601071  660439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0317 13:36:58.601085  660439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:36:58.805900  660439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:36:58.805928  660439 machine.go:96] duration metric: took 1.198520792s to provisionDockerMachine
	I0317 13:36:58.805944  660439 start.go:293] postStartSetup for "test-preload-939223" (driver="kvm2")
	I0317 13:36:58.805957  660439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:36:58.805981  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	I0317 13:36:58.806348  660439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:36:58.806387  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:36:58.809250  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:58.809642  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:58.809665  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:58.809801  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHPort
	I0317 13:36:58.809960  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:58.810076  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHUsername
	I0317 13:36:58.810192  660439 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/test-preload-939223/id_rsa Username:docker}
	I0317 13:36:58.889611  660439 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:36:58.893471  660439 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:36:58.893512  660439 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:36:58.893593  660439 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:36:58.893666  660439 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:36:58.893752  660439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:36:58.902388  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:36:58.923989  660439 start.go:296] duration metric: took 118.030097ms for postStartSetup
	I0317 13:36:58.924039  660439 fix.go:56] duration metric: took 20.515809493s for fixHost
	I0317 13:36:58.924066  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:36:58.926597  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:58.926878  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:58.926910  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:58.927051  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHPort
	I0317 13:36:58.927255  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:58.927438  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:58.927548  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHUsername
	I0317 13:36:58.927707  660439 main.go:141] libmachine: Using SSH client type: native
	I0317 13:36:58.927905  660439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0317 13:36:58.927916  660439 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:36:59.027987  660439 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742218619.006249072
	
	I0317 13:36:59.028009  660439 fix.go:216] guest clock: 1742218619.006249072
	I0317 13:36:59.028017  660439 fix.go:229] Guest: 2025-03-17 13:36:59.006249072 +0000 UTC Remote: 2025-03-17 13:36:58.924044909 +0000 UTC m=+24.794403686 (delta=82.204163ms)
	I0317 13:36:59.028039  660439 fix.go:200] guest clock delta is within tolerance: 82.204163ms
	I0317 13:36:59.028044  660439 start.go:83] releasing machines lock for "test-preload-939223", held for 20.61983354s
	I0317 13:36:59.028063  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	I0317 13:36:59.028316  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetIP
	I0317 13:36:59.030791  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:59.031115  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:59.031146  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:59.031259  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	I0317 13:36:59.031769  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	I0317 13:36:59.031942  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	I0317 13:36:59.032043  660439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:36:59.032083  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:36:59.032115  660439 ssh_runner.go:195] Run: cat /version.json
	I0317 13:36:59.032144  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:36:59.034570  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:59.034802  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:59.034881  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:59.034911  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:59.035009  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHPort
	I0317 13:36:59.035172  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:59.035206  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:36:59.035226  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:36:59.035351  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHUsername
	I0317 13:36:59.035359  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHPort
	I0317 13:36:59.035507  660439 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/test-preload-939223/id_rsa Username:docker}
	I0317 13:36:59.035565  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:36:59.035707  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHUsername
	I0317 13:36:59.035825  660439 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/test-preload-939223/id_rsa Username:docker}
	I0317 13:36:59.107992  660439 ssh_runner.go:195] Run: systemctl --version
	I0317 13:36:59.129990  660439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:36:59.275919  660439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:36:59.281709  660439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:36:59.281775  660439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:36:59.296911  660439 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:36:59.296942  660439 start.go:495] detecting cgroup driver to use...
	I0317 13:36:59.297003  660439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:36:59.311507  660439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:36:59.323867  660439 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:36:59.323921  660439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:36:59.335883  660439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:36:59.348125  660439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:36:59.459050  660439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:36:59.584034  660439 docker.go:233] disabling docker service ...
	I0317 13:36:59.584104  660439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:36:59.597486  660439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:36:59.609100  660439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:36:59.732903  660439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:36:59.834277  660439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:36:59.847198  660439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:36:59.863834  660439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0317 13:36:59.863902  660439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:36:59.873568  660439 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:36:59.873628  660439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:36:59.883004  660439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:36:59.892192  660439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:36:59.901555  660439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:36:59.910939  660439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:36:59.920022  660439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:36:59.934894  660439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:36:59.944258  660439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:36:59.952736  660439 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:36:59.952795  660439 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:36:59.966217  660439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:36:59.975069  660439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:37:00.077517  660439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:37:00.161397  660439 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:37:00.161466  660439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:37:00.165656  660439 start.go:563] Will wait 60s for crictl version
	I0317 13:37:00.165732  660439 ssh_runner.go:195] Run: which crictl
	I0317 13:37:00.169156  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:37:00.203919  660439 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:37:00.204001  660439 ssh_runner.go:195] Run: crio --version
	I0317 13:37:00.229917  660439 ssh_runner.go:195] Run: crio --version
	I0317 13:37:00.256870  660439 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0317 13:37:00.258075  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetIP
	I0317 13:37:00.260729  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:37:00.261079  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:37:00.261104  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:37:00.261327  660439 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0317 13:37:00.265276  660439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:37:00.277241  660439 kubeadm.go:883] updating cluster {Name:test-preload-939223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-939223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:37:00.277381  660439 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0317 13:37:00.277438  660439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:37:00.309908  660439 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0317 13:37:00.309973  660439 ssh_runner.go:195] Run: which lz4
	I0317 13:37:00.313606  660439 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:37:00.317222  660439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:37:00.317246  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0317 13:37:01.664032  660439 crio.go:462] duration metric: took 1.35045529s to copy over tarball
	I0317 13:37:01.664103  660439 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:37:03.968069  660439 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.303925513s)
	I0317 13:37:03.968119  660439 crio.go:469] duration metric: took 2.304056572s to extract the tarball
	I0317 13:37:03.968129  660439 ssh_runner.go:146] rm: /preloaded.tar.lz4
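The preload path above is: stat the tarball on the guest, copy it over only if missing, extract it under /var with lz4, then delete it. A simplified Go sketch of that flow run locally; using a plain file copy instead of minikube's ssh_runner scp is an assumption for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload places the preloaded-images tarball at dst (if not already
// there), unpacks it under /var, and removes it, mirroring the steps in the log.
func extractPreload(src, dst string) error {
	if _, err := os.Stat(dst); err != nil {
		// Not present yet: in minikube this is an scp over SSH; here a local copy.
		if err := exec.Command("cp", src, dst).Run(); err != nil {
			return fmt.Errorf("copy preload: %w", err)
		}
	}
	// Same flags as the log: preserve security xattrs, decompress with lz4, extract under /var.
	if err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", dst).Run(); err != nil {
		return fmt.Errorf("extract preload: %w", err)
	}
	return exec.Command("sudo", "rm", "-f", dst).Run()
}

func main() {
	if err := extractPreload("preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4", "/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}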
	I0317 13:37:04.008453  660439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:37:04.052902  660439 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0317 13:37:04.052933  660439 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0317 13:37:04.053026  660439 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0317 13:37:04.053053  660439 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0317 13:37:04.053058  660439 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0317 13:37:04.053026  660439 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:37:04.053089  660439 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0317 13:37:04.053107  660439 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0317 13:37:04.053054  660439 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0317 13:37:04.053148  660439 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0317 13:37:04.054724  660439 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0317 13:37:04.054746  660439 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0317 13:37:04.054750  660439 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0317 13:37:04.054724  660439 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0317 13:37:04.054823  660439 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0317 13:37:04.054853  660439 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0317 13:37:04.054853  660439 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0317 13:37:04.054864  660439 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:37:04.199941  660439 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0317 13:37:04.200107  660439 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0317 13:37:04.206407  660439 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0317 13:37:04.209395  660439 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0317 13:37:04.219560  660439 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0317 13:37:04.221287  660439 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0317 13:37:04.263165  660439 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0317 13:37:04.325019  660439 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0317 13:37:04.325070  660439 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0317 13:37:04.325080  660439 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0317 13:37:04.325107  660439 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0317 13:37:04.325134  660439 ssh_runner.go:195] Run: which crictl
	I0317 13:37:04.325155  660439 ssh_runner.go:195] Run: which crictl
	I0317 13:37:04.367373  660439 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0317 13:37:04.367433  660439 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0317 13:37:04.367435  660439 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0317 13:37:04.367468  660439 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0317 13:37:04.367486  660439 ssh_runner.go:195] Run: which crictl
	I0317 13:37:04.367513  660439 ssh_runner.go:195] Run: which crictl
	I0317 13:37:04.371888  660439 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0317 13:37:04.371932  660439 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0317 13:37:04.371938  660439 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0317 13:37:04.371966  660439 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0317 13:37:04.371977  660439 ssh_runner.go:195] Run: which crictl
	I0317 13:37:04.371986  660439 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0317 13:37:04.372001  660439 ssh_runner.go:195] Run: which crictl
	I0317 13:37:04.372019  660439 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0317 13:37:04.372062  660439 ssh_runner.go:195] Run: which crictl
	I0317 13:37:04.372087  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0317 13:37:04.372114  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0317 13:37:04.375286  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0317 13:37:04.376027  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0317 13:37:04.431024  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0317 13:37:04.431074  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0317 13:37:04.431082  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0317 13:37:04.431177  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0317 13:37:04.431183  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0317 13:37:04.453295  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0317 13:37:04.518288  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0317 13:37:04.588751  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0317 13:37:04.588824  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0317 13:37:04.588837  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0317 13:37:04.588908  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0317 13:37:04.588979  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0317 13:37:04.589000  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0317 13:37:04.589038  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0317 13:37:04.731875  660439 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0317 13:37:04.731930  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0317 13:37:04.731996  660439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0317 13:37:04.732030  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0317 13:37:04.732079  660439 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0317 13:37:04.732118  660439 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0317 13:37:04.732153  660439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0317 13:37:04.732181  660439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0317 13:37:04.732246  660439 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0317 13:37:04.732268  660439 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0317 13:37:04.732327  660439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0317 13:37:04.798253  660439 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0317 13:37:04.798340  660439 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0317 13:37:04.798378  660439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0317 13:37:04.798426  660439 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0317 13:37:04.798455  660439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0317 13:37:04.798475  660439 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0317 13:37:04.798488  660439 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0317 13:37:04.798491  660439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0317 13:37:04.798503  660439 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0317 13:37:04.798514  660439 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0317 13:37:04.798381  660439 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0317 13:37:04.798356  660439 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0317 13:37:05.847090  660439 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:37:08.053250  660439 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.254711919s)
	I0317 13:37:08.053277  660439 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0317 13:37:08.053304  660439 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0317 13:37:08.053354  660439 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0317 13:37:08.053398  660439 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.254887253s)
	I0317 13:37:08.053439  660439 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0317 13:37:08.053351  660439 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (3.254872293s)
	I0317 13:37:08.053456  660439 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0317 13:37:08.053474  660439 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.255078766s)
	I0317 13:37:08.053504  660439 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0317 13:37:08.053532  660439 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.206403288s)
	I0317 13:37:10.101037  660439 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.047650289s)
	I0317 13:37:10.101077  660439 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0317 13:37:10.101105  660439 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0317 13:37:10.101157  660439 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0317 13:37:10.842422  660439 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0317 13:37:10.842476  660439 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0317 13:37:10.842527  660439 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0317 13:37:10.990959  660439 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0317 13:37:10.991013  660439 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0317 13:37:10.991065  660439 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0317 13:37:11.434947  660439 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0317 13:37:11.434995  660439 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0317 13:37:11.435045  660439 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0317 13:37:11.780580  660439 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0317 13:37:11.780627  660439 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0317 13:37:11.780688  660439 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0317 13:37:12.626884  660439 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0317 13:37:12.626933  660439 cache_images.go:123] Successfully loaded all cached images
	I0317 13:37:12.626938  660439 cache_images.go:92] duration metric: took 8.573994261s to LoadCachedImages
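The block above is the cached-image load loop: `podman image inspect` checks whether an image already exists in the runtime, `crictl rmi` clears any copy at the wrong hash, and `podman load -i` imports the cached tarball from /var/lib/minikube/images. A hedged Go sketch of that per-image sequence (function names are invented; the real logic lives in cache_images.go and crio.go):

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the inspect / rmi / load sequence from the log for a
// single image: skip the transfer when the image is already present in CRI-O.
func loadCachedImage(image, tarball string) error {
	if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
		return nil // already present at some hash; the real code also compares digests
	}
	// Remove any stale or partially-loaded copy first.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	err := loadCachedImage("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7")
	fmt.Println("load result:", err)
}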
	I0317 13:37:12.626951  660439 kubeadm.go:934] updating node { 192.168.39.2 8443 v1.24.4 crio true true} ...
	I0317 13:37:12.627052  660439 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-939223 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-939223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:37:12.627120  660439 ssh_runner.go:195] Run: crio config
	I0317 13:37:12.674096  660439 cni.go:84] Creating CNI manager for ""
	I0317 13:37:12.674119  660439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:37:12.674132  660439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:37:12.674156  660439 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-939223 NodeName:test-preload-939223 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:37:12.674324  660439 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-939223"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:37:12.674402  660439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0317 13:37:12.683379  660439 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:37:12.683461  660439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:37:12.691997  660439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0317 13:37:12.707058  660439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:37:12.722065  660439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0317 13:37:12.737909  660439 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I0317 13:37:12.741539  660439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:37:12.752931  660439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:37:12.884530  660439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:37:12.901041  660439 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223 for IP: 192.168.39.2
	I0317 13:37:12.901069  660439 certs.go:194] generating shared ca certs ...
	I0317 13:37:12.901096  660439 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:37:12.901293  660439 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:37:12.901351  660439 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:37:12.901364  660439 certs.go:256] generating profile certs ...
	I0317 13:37:12.901470  660439 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/client.key
	I0317 13:37:12.901570  660439 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/apiserver.key.d83e127c
	I0317 13:37:12.901715  660439 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/proxy-client.key
	I0317 13:37:12.901889  660439 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:37:12.901935  660439 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:37:12.901954  660439 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:37:12.902001  660439 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:37:12.902032  660439 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:37:12.902064  660439 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:37:12.902124  660439 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:37:12.902956  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:37:12.928635  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:37:12.958001  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:37:12.989142  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:37:13.019054  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0317 13:37:13.045635  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 13:37:13.078171  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:37:13.106950  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 13:37:13.128460  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:37:13.149203  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:37:13.169909  660439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:37:13.190922  660439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:37:13.205779  660439 ssh_runner.go:195] Run: openssl version
	I0317 13:37:13.210969  660439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:37:13.220435  660439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:37:13.224569  660439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:37:13.224634  660439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:37:13.229704  660439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:37:13.239069  660439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:37:13.248479  660439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:37:13.252756  660439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:37:13.252795  660439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:37:13.257760  660439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:37:13.266964  660439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:37:13.276471  660439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:37:13.280451  660439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:37:13.280509  660439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:37:13.285606  660439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:37:13.294885  660439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:37:13.298926  660439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0317 13:37:13.304250  660439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0317 13:37:13.309441  660439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0317 13:37:13.314832  660439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0317 13:37:13.319959  660439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0317 13:37:13.324959  660439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
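Each `openssl x509 -checkend 86400` call above asks a single question: does this certificate expire within the next 86400 seconds (24 hours)? An equivalent check in Go with crypto/x509; the path is just one of the files from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}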
	I0317 13:37:13.329973  660439 kubeadm.go:392] StartCluster: {Name:test-preload-939223 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-939223 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:37:13.330051  660439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:37:13.330087  660439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:37:13.363102  660439 cri.go:89] found id: ""
	I0317 13:37:13.363177  660439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:37:13.372261  660439 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0317 13:37:13.372279  660439 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0317 13:37:13.372329  660439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0317 13:37:13.380845  660439 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:37:13.381231  660439 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-939223" does not appear in /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:37:13.381338  660439 kubeconfig.go:62] /home/jenkins/minikube-integration/20539-621978/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-939223" cluster setting kubeconfig missing "test-preload-939223" context setting]
	I0317 13:37:13.381608  660439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:37:13.382084  660439 kapi.go:59] client config for test-preload-939223: &rest.Config{Host:"https://192.168.39.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/client.crt", KeyFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/client.key", CAFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 13:37:13.382469  660439 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0317 13:37:13.382486  660439 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0317 13:37:13.382495  660439 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0317 13:37:13.382500  660439 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0317 13:37:13.382871  660439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0317 13:37:13.391052  660439 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.2
	I0317 13:37:13.391086  660439 kubeadm.go:1160] stopping kube-system containers ...
	I0317 13:37:13.391100  660439 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0317 13:37:13.391228  660439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:37:13.421397  660439 cri.go:89] found id: ""
	I0317 13:37:13.421459  660439 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0317 13:37:13.436321  660439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:37:13.444874  660439 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:37:13.444897  660439 kubeadm.go:157] found existing configuration files:
	
	I0317 13:37:13.444943  660439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:37:13.452985  660439 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:37:13.453030  660439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:37:13.461447  660439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:37:13.469221  660439 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:37:13.469264  660439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:37:13.477363  660439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:37:13.485207  660439 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:37:13.485259  660439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:37:13.493638  660439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:37:13.501493  660439 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:37:13.501559  660439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
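The grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is deleted so the following `kubeadm init phase kubeconfig` regenerates it. A minimal Go sketch of that loop (illustrative, not the kubeadm.go implementation):

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep/rm sequence in the log.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if exec.Command("sudo", "grep", endpoint, f).Run() != nil {
			// Missing or pointing elsewhere: delete so "kubeadm init phase kubeconfig" recreates it.
			_ = exec.Command("sudo", "rm", "-f", f).Run()
			fmt.Println("removed stale", f)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}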
	I0317 13:37:13.509666  660439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:37:13.518085  660439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:37:13.602998  660439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:37:14.481190  660439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:37:14.732467  660439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:37:14.800364  660439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:37:14.893031  660439 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:37:14.893110  660439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:37:15.393878  660439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:37:15.893581  660439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:37:15.912477  660439 api_server.go:72] duration metric: took 1.019448278s to wait for apiserver process to appear ...
	I0317 13:37:15.912504  660439 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:37:15.912529  660439 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0317 13:37:15.913087  660439 api_server.go:269] stopped: https://192.168.39.2:8443/healthz: Get "https://192.168.39.2:8443/healthz": dial tcp 192.168.39.2:8443: connect: connection refused
	I0317 13:37:16.412796  660439 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0317 13:37:19.407005  660439 api_server.go:279] https://192.168.39.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0317 13:37:19.407034  660439 api_server.go:103] status: https://192.168.39.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0317 13:37:19.407057  660439 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0317 13:37:19.421513  660439 api_server.go:279] https://192.168.39.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0317 13:37:19.421542  660439 api_server.go:103] status: https://192.168.39.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0317 13:37:19.421563  660439 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0317 13:37:19.444115  660439 api_server.go:279] https://192.168.39.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0317 13:37:19.444140  660439 api_server.go:103] status: https://192.168.39.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0317 13:37:19.912705  660439 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0317 13:37:19.919694  660439 api_server.go:279] https://192.168.39.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:37:19.919722  660439 api_server.go:103] status: https://192.168.39.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:37:20.413099  660439 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0317 13:37:20.418171  660439 api_server.go:279] https://192.168.39.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:37:20.418194  660439 api_server.go:103] status: https://192.168.39.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:37:20.912819  660439 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0317 13:37:20.919649  660439 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0317 13:37:20.929094  660439 api_server.go:141] control plane version: v1.24.4
	I0317 13:37:20.929122  660439 api_server.go:131] duration metric: took 5.016610505s to wait for apiserver health ...
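The wait above polls https://192.168.39.2:8443/healthz, treating connection refused, 403 (anonymous access before the RBAC bootstrap roles exist) and 500 (post-start hooks still running) as "not ready yet", and stops only on a 200. A minimal Go sketch of that loop; skipping TLS verification here is an assumption for brevity, whereas minikube wires up the profile's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes, mirroring the retry loop in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch-only shortcut: skip cert verification instead of loading client certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // healthz returned 200: the control plane is serving
			}
			// 403 and 500 mean "started but not ready"; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.2:8443/healthz", 2*time.Minute))
}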
	I0317 13:37:20.929132  660439 cni.go:84] Creating CNI manager for ""
	I0317 13:37:20.929138  660439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:37:20.930843  660439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:37:20.932024  660439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:37:20.947069  660439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0317 13:37:20.969457  660439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:37:20.974640  660439 system_pods.go:59] 7 kube-system pods found
	I0317 13:37:20.974669  660439 system_pods.go:61] "coredns-6d4b75cb6d-wrvgp" [4b626d8a-62f1-476a-a0ae-23607f6f7fd2] Running
	I0317 13:37:20.974680  660439 system_pods.go:61] "etcd-test-preload-939223" [eba62fa3-e4f8-460a-b366-78d7fad14558] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0317 13:37:20.974685  660439 system_pods.go:61] "kube-apiserver-test-preload-939223" [b34ce5bf-7b09-43b7-9d50-ddcad29f4157] Running
	I0317 13:37:20.974694  660439 system_pods.go:61] "kube-controller-manager-test-preload-939223" [5ef4f3b1-3534-4e86-865d-7e303a2084b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0317 13:37:20.974699  660439 system_pods.go:61] "kube-proxy-qsk5k" [68ffdc87-f7ac-4cbc-9e3e-e57e14601a19] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0317 13:37:20.974703  660439 system_pods.go:61] "kube-scheduler-test-preload-939223" [c5547b92-a18f-4539-a2be-73ea0b44dceb] Running
	I0317 13:37:20.974706  660439 system_pods.go:61] "storage-provisioner" [0d517b0a-2408-4f9d-9c81-17efc006d020] Running
	I0317 13:37:20.974714  660439 system_pods.go:74] duration metric: took 5.234079ms to wait for pod list to return data ...
	I0317 13:37:20.974720  660439 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:37:20.977658  660439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:37:20.977685  660439 node_conditions.go:123] node cpu capacity is 2
	I0317 13:37:20.977696  660439 node_conditions.go:105] duration metric: took 2.971893ms to run NodePressure ...
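The system_pods and node_conditions checks above, and the pod_ready waits that follow, boil down to listing kube-system pods and inspecting the node's Ready condition through the API server. A rough client-go sketch of those two queries, using the kubeconfig path from this run as an assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: load the kubeconfig written by this run; minikube builds its rest.Config directly (kapi.go above).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20539-621978/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List kube-system pods, as in the "waiting for kube-system pods to appear" step.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
	// Check the node Ready condition, which gates the pod_ready waits further down.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println(n.Name, "Ready:", c.Status)
			}
		}
	}
}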
	I0317 13:37:20.977716  660439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:37:21.181609  660439 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0317 13:37:21.184741  660439 kubeadm.go:739] kubelet initialised
	I0317 13:37:21.184766  660439 kubeadm.go:740] duration metric: took 3.126807ms waiting for restarted kubelet to initialise ...
	I0317 13:37:21.184777  660439 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:37:21.187354  660439 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-wrvgp" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:21.190589  660439 pod_ready.go:98] node "test-preload-939223" hosting pod "coredns-6d4b75cb6d-wrvgp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:21.190614  660439 pod_ready.go:82] duration metric: took 3.238468ms for pod "coredns-6d4b75cb6d-wrvgp" in "kube-system" namespace to be "Ready" ...
	E0317 13:37:21.190626  660439 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-939223" hosting pod "coredns-6d4b75cb6d-wrvgp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:21.190635  660439 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:21.198416  660439 pod_ready.go:98] node "test-preload-939223" hosting pod "etcd-test-preload-939223" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:21.198435  660439 pod_ready.go:82] duration metric: took 7.790257ms for pod "etcd-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	E0317 13:37:21.198442  660439 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-939223" hosting pod "etcd-test-preload-939223" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:21.198448  660439 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:21.203306  660439 pod_ready.go:98] node "test-preload-939223" hosting pod "kube-apiserver-test-preload-939223" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:21.203328  660439 pod_ready.go:82] duration metric: took 4.871486ms for pod "kube-apiserver-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	E0317 13:37:21.203348  660439 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-939223" hosting pod "kube-apiserver-test-preload-939223" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:21.203360  660439 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:21.372571  660439 pod_ready.go:98] node "test-preload-939223" hosting pod "kube-controller-manager-test-preload-939223" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:21.372600  660439 pod_ready.go:82] duration metric: took 169.231755ms for pod "kube-controller-manager-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	E0317 13:37:21.372613  660439 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-939223" hosting pod "kube-controller-manager-test-preload-939223" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:21.372619  660439 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qsk5k" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:21.773012  660439 pod_ready.go:98] node "test-preload-939223" hosting pod "kube-proxy-qsk5k" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:21.773040  660439 pod_ready.go:82] duration metric: took 400.412244ms for pod "kube-proxy-qsk5k" in "kube-system" namespace to be "Ready" ...
	E0317 13:37:21.773049  660439 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-939223" hosting pod "kube-proxy-qsk5k" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:21.773060  660439 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:22.173415  660439 pod_ready.go:98] node "test-preload-939223" hosting pod "kube-scheduler-test-preload-939223" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:22.173442  660439 pod_ready.go:82] duration metric: took 400.375335ms for pod "kube-scheduler-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	E0317 13:37:22.173452  660439 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-939223" hosting pod "kube-scheduler-test-preload-939223" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:22.173459  660439 pod_ready.go:39] duration metric: took 988.670027ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:37:22.173479  660439 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:37:22.184553  660439 ops.go:34] apiserver oom_adj: -16
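The oom_adj probe above is just a shell read of /proc for the apiserver PID. A minimal Go equivalent is sketched below; it uses a simplified pgrep match (the log uses a fuller `-xnf` pattern) and assumes a single kube-apiserver process on the node.

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the newest kube-apiserver process (simplified version of the pgrep call in the log).
		out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
		if err != nil {
			log.Fatalf("kube-apiserver not running: %v", err)
		}
		pid := strings.TrimSpace(string(out))

		// Read its OOM adjustment value, as the `cat /proc/$(pgrep ...)/oom_adj` above does.
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}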
	I0317 13:37:22.184573  660439 kubeadm.go:597] duration metric: took 8.812288288s to restartPrimaryControlPlane
	I0317 13:37:22.184581  660439 kubeadm.go:394] duration metric: took 8.854613197s to StartCluster
	I0317 13:37:22.184609  660439 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:37:22.184691  660439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:37:22.185373  660439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:37:22.185656  660439 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:37:22.185710  660439 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:37:22.185830  660439 addons.go:69] Setting storage-provisioner=true in profile "test-preload-939223"
	I0317 13:37:22.185854  660439 addons.go:238] Setting addon storage-provisioner=true in "test-preload-939223"
	I0317 13:37:22.185849  660439 addons.go:69] Setting default-storageclass=true in profile "test-preload-939223"
	W0317 13:37:22.185863  660439 addons.go:247] addon storage-provisioner should already be in state true
	I0317 13:37:22.185864  660439 config.go:182] Loaded profile config "test-preload-939223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0317 13:37:22.185876  660439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-939223"
	I0317 13:37:22.185897  660439 host.go:66] Checking if "test-preload-939223" exists ...
	I0317 13:37:22.186235  660439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:37:22.186289  660439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:37:22.186400  660439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:37:22.186447  660439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:37:22.188263  660439 out.go:177] * Verifying Kubernetes components...
	I0317 13:37:22.189434  660439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:37:22.201973  660439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0317 13:37:22.201973  660439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44721
	I0317 13:37:22.202547  660439 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:37:22.202675  660439 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:37:22.203171  660439 main.go:141] libmachine: Using API Version  1
	I0317 13:37:22.203191  660439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:37:22.203327  660439 main.go:141] libmachine: Using API Version  1
	I0317 13:37:22.203352  660439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:37:22.203602  660439 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:37:22.203733  660439 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:37:22.203901  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetState
	I0317 13:37:22.204202  660439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:37:22.204251  660439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:37:22.206269  660439 kapi.go:59] client config for test-preload-939223: &rest.Config{Host:"https://192.168.39.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/client.crt", KeyFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/test-preload-939223/client.key", CAFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
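The client config dumped by kapi.go above is a standard client-go rest.Config built from the profile's client certificate, key, and CA. A minimal sketch of the same kind of /healthz probe, built from the kubeconfig path that appears in the log (everything else is illustrative, not minikube's own code), looks like this:

	package main

	import (
		"context"
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the log above; assumed to be readable here.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20539-621978/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// GET /healthz, the same endpoint api_server.go polls above.
		body, err := clientset.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("/healthz returned: %s\n", body)
	}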
	I0317 13:37:22.206631  660439 addons.go:238] Setting addon default-storageclass=true in "test-preload-939223"
	W0317 13:37:22.206653  660439 addons.go:247] addon default-storageclass should already be in state true
	I0317 13:37:22.206680  660439 host.go:66] Checking if "test-preload-939223" exists ...
	I0317 13:37:22.207047  660439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:37:22.207091  660439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:37:22.219543  660439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I0317 13:37:22.220023  660439 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:37:22.220508  660439 main.go:141] libmachine: Using API Version  1
	I0317 13:37:22.220538  660439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:37:22.220869  660439 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:37:22.221066  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetState
	I0317 13:37:22.221524  660439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0317 13:37:22.221925  660439 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:37:22.222335  660439 main.go:141] libmachine: Using API Version  1
	I0317 13:37:22.222358  660439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:37:22.222752  660439 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:37:22.222885  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	I0317 13:37:22.223228  660439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:37:22.223269  660439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:37:22.225085  660439 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:37:22.226737  660439 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:37:22.226756  660439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 13:37:22.226771  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:37:22.229325  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:37:22.229759  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:37:22.229786  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:37:22.229921  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHPort
	I0317 13:37:22.230076  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:37:22.230209  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHUsername
	I0317 13:37:22.230315  660439 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/test-preload-939223/id_rsa Username:docker}
	I0317 13:37:22.262939  660439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I0317 13:37:22.263363  660439 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:37:22.263831  660439 main.go:141] libmachine: Using API Version  1
	I0317 13:37:22.263855  660439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:37:22.264219  660439 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:37:22.264440  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetState
	I0317 13:37:22.265982  660439 main.go:141] libmachine: (test-preload-939223) Calling .DriverName
	I0317 13:37:22.266172  660439 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 13:37:22.266187  660439 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 13:37:22.266201  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHHostname
	I0317 13:37:22.268898  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:37:22.269405  660439 main.go:141] libmachine: (test-preload-939223) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:64:db", ip: ""} in network mk-test-preload-939223: {Iface:virbr1 ExpiryTime:2025-03-17 14:36:48 +0000 UTC Type:0 Mac:52:54:00:2b:64:db Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:test-preload-939223 Clientid:01:52:54:00:2b:64:db}
	I0317 13:37:22.269437  660439 main.go:141] libmachine: (test-preload-939223) DBG | domain test-preload-939223 has defined IP address 192.168.39.2 and MAC address 52:54:00:2b:64:db in network mk-test-preload-939223
	I0317 13:37:22.269593  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHPort
	I0317 13:37:22.269792  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHKeyPath
	I0317 13:37:22.269937  660439 main.go:141] libmachine: (test-preload-939223) Calling .GetSSHUsername
	I0317 13:37:22.270063  660439 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/test-preload-939223/id_rsa Username:docker}
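The sshutil.go clients above are built from the machine's private key and address. A stripped-down sketch using golang.org/x/crypto/ssh is shown below; the key path, user, and IP are taken from the log, while the command being run is just an illustrative example.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and address as reported by sshutil.go above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20539-621978/.minikube/machines/test-preload-939223/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.2:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
		fmt.Printf("kubelet: %s (err=%v)\n", out, err)
	}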
	I0317 13:37:22.355362  660439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:37:22.371294  660439 node_ready.go:35] waiting up to 6m0s for node "test-preload-939223" to be "Ready" ...
	I0317 13:37:22.441867  660439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:37:22.454122  660439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 13:37:23.333337  660439 main.go:141] libmachine: Making call to close driver server
	I0317 13:37:23.333371  660439 main.go:141] libmachine: (test-preload-939223) Calling .Close
	I0317 13:37:23.333378  660439 main.go:141] libmachine: Making call to close driver server
	I0317 13:37:23.333389  660439 main.go:141] libmachine: (test-preload-939223) Calling .Close
	I0317 13:37:23.333699  660439 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:37:23.333722  660439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:37:23.333731  660439 main.go:141] libmachine: Making call to close driver server
	I0317 13:37:23.333739  660439 main.go:141] libmachine: (test-preload-939223) Calling .Close
	I0317 13:37:23.333704  660439 main.go:141] libmachine: (test-preload-939223) DBG | Closing plugin on server side
	I0317 13:37:23.333704  660439 main.go:141] libmachine: (test-preload-939223) DBG | Closing plugin on server side
	I0317 13:37:23.333799  660439 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:37:23.333811  660439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:37:23.333822  660439 main.go:141] libmachine: Making call to close driver server
	I0317 13:37:23.333830  660439 main.go:141] libmachine: (test-preload-939223) Calling .Close
	I0317 13:37:23.333977  660439 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:37:23.334004  660439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:37:23.334281  660439 main.go:141] libmachine: (test-preload-939223) DBG | Closing plugin on server side
	I0317 13:37:23.334336  660439 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:37:23.334355  660439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:37:23.339900  660439 main.go:141] libmachine: Making call to close driver server
	I0317 13:37:23.339925  660439 main.go:141] libmachine: (test-preload-939223) Calling .Close
	I0317 13:37:23.340207  660439 main.go:141] libmachine: (test-preload-939223) DBG | Closing plugin on server side
	I0317 13:37:23.340217  660439 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:37:23.340233  660439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:37:23.342264  660439 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 13:37:23.343577  660439 addons.go:514] duration metric: took 1.157874711s for enable addons: enabled=[storage-provisioner default-storageclass]
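Enabling default-storageclass amounts to applying storageclass.yaml; one way to verify the result afterwards is to list StorageClasses through client-go and look for the default-class annotation. This is an illustrative sketch (kubeconfig path assumed from the log), not what minikube's addon code itself does.

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20539-621978/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, sc := range scs.Items {
			// The default class carries this well-known annotation.
			isDefault := sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true"
			fmt.Printf("%s default=%v provisioner=%s\n", sc.Name, isDefault, sc.Provisioner)
		}
	}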
	I0317 13:37:24.375774  660439 node_ready.go:53] node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:26.874217  660439 node_ready.go:53] node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:29.374252  660439 node_ready.go:53] node "test-preload-939223" has status "Ready":"False"
	I0317 13:37:29.874456  660439 node_ready.go:49] node "test-preload-939223" has status "Ready":"True"
	I0317 13:37:29.874487  660439 node_ready.go:38] duration metric: took 7.503125842s for node "test-preload-939223" to be "Ready" ...
	I0317 13:37:29.874497  660439 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:37:29.877698  660439 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-wrvgp" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:29.881085  660439 pod_ready.go:93] pod "coredns-6d4b75cb6d-wrvgp" in "kube-system" namespace has status "Ready":"True"
	I0317 13:37:29.881104  660439 pod_ready.go:82] duration metric: took 3.382479ms for pod "coredns-6d4b75cb6d-wrvgp" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:29.881112  660439 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:31.887431  660439 pod_ready.go:103] pod "etcd-test-preload-939223" in "kube-system" namespace has status "Ready":"False"
	I0317 13:37:33.887316  660439 pod_ready.go:93] pod "etcd-test-preload-939223" in "kube-system" namespace has status "Ready":"True"
	I0317 13:37:33.887346  660439 pod_ready.go:82] duration metric: took 4.006227227s for pod "etcd-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:33.887359  660439 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:33.890844  660439 pod_ready.go:93] pod "kube-apiserver-test-preload-939223" in "kube-system" namespace has status "Ready":"True"
	I0317 13:37:33.890865  660439 pod_ready.go:82] duration metric: took 3.497727ms for pod "kube-apiserver-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:33.890874  660439 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:33.894912  660439 pod_ready.go:93] pod "kube-controller-manager-test-preload-939223" in "kube-system" namespace has status "Ready":"True"
	I0317 13:37:33.894937  660439 pod_ready.go:82] duration metric: took 4.054085ms for pod "kube-controller-manager-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:33.894953  660439 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qsk5k" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:33.899028  660439 pod_ready.go:93] pod "kube-proxy-qsk5k" in "kube-system" namespace has status "Ready":"True"
	I0317 13:37:33.899048  660439 pod_ready.go:82] duration metric: took 4.084963ms for pod "kube-proxy-qsk5k" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:33.899060  660439 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:34.904425  660439 pod_ready.go:93] pod "kube-scheduler-test-preload-939223" in "kube-system" namespace has status "Ready":"True"
	I0317 13:37:34.904448  660439 pod_ready.go:82] duration metric: took 1.005378004s for pod "kube-scheduler-test-preload-939223" in "kube-system" namespace to be "Ready" ...
	I0317 13:37:34.904458  660439 pod_ready.go:39] duration metric: took 5.029950914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
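Conceptually, each pod_ready.go wait above boils down to polling the pod and checking its Ready condition until a deadline. A minimal sketch of such a loop (not minikube's own helper), reusing the kubeconfig path and one of the pod names from the log:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20539-621978/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-test-preload-939223", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("timed out waiting for pod to be Ready")
	}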
	I0317 13:37:34.904474  660439 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:37:34.904550  660439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:37:34.928610  660439 api_server.go:72] duration metric: took 12.742907181s to wait for apiserver process to appear ...
	I0317 13:37:34.928640  660439 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:37:34.928663  660439 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0317 13:37:34.933964  660439 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0317 13:37:34.935340  660439 api_server.go:141] control plane version: v1.24.4
	I0317 13:37:34.935368  660439 api_server.go:131] duration metric: took 6.719501ms to wait for apiserver health ...
	I0317 13:37:34.935378  660439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:37:34.939581  660439 system_pods.go:59] 7 kube-system pods found
	I0317 13:37:34.939614  660439 system_pods.go:61] "coredns-6d4b75cb6d-wrvgp" [4b626d8a-62f1-476a-a0ae-23607f6f7fd2] Running
	I0317 13:37:34.939622  660439 system_pods.go:61] "etcd-test-preload-939223" [eba62fa3-e4f8-460a-b366-78d7fad14558] Running
	I0317 13:37:34.939629  660439 system_pods.go:61] "kube-apiserver-test-preload-939223" [b34ce5bf-7b09-43b7-9d50-ddcad29f4157] Running
	I0317 13:37:34.939635  660439 system_pods.go:61] "kube-controller-manager-test-preload-939223" [5ef4f3b1-3534-4e86-865d-7e303a2084b3] Running
	I0317 13:37:34.939643  660439 system_pods.go:61] "kube-proxy-qsk5k" [68ffdc87-f7ac-4cbc-9e3e-e57e14601a19] Running
	I0317 13:37:34.939648  660439 system_pods.go:61] "kube-scheduler-test-preload-939223" [c5547b92-a18f-4539-a2be-73ea0b44dceb] Running
	I0317 13:37:34.939653  660439 system_pods.go:61] "storage-provisioner" [0d517b0a-2408-4f9d-9c81-17efc006d020] Running
	I0317 13:37:34.939661  660439 system_pods.go:74] duration metric: took 4.275045ms to wait for pod list to return data ...
	I0317 13:37:34.939670  660439 default_sa.go:34] waiting for default service account to be created ...
	I0317 13:37:35.084578  660439 default_sa.go:45] found service account: "default"
	I0317 13:37:35.084614  660439 default_sa.go:55] duration metric: took 144.926009ms for default service account to be created ...
	I0317 13:37:35.084625  660439 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 13:37:35.285943  660439 system_pods.go:86] 7 kube-system pods found
	I0317 13:37:35.285977  660439 system_pods.go:89] "coredns-6d4b75cb6d-wrvgp" [4b626d8a-62f1-476a-a0ae-23607f6f7fd2] Running
	I0317 13:37:35.285983  660439 system_pods.go:89] "etcd-test-preload-939223" [eba62fa3-e4f8-460a-b366-78d7fad14558] Running
	I0317 13:37:35.285987  660439 system_pods.go:89] "kube-apiserver-test-preload-939223" [b34ce5bf-7b09-43b7-9d50-ddcad29f4157] Running
	I0317 13:37:35.285990  660439 system_pods.go:89] "kube-controller-manager-test-preload-939223" [5ef4f3b1-3534-4e86-865d-7e303a2084b3] Running
	I0317 13:37:35.285993  660439 system_pods.go:89] "kube-proxy-qsk5k" [68ffdc87-f7ac-4cbc-9e3e-e57e14601a19] Running
	I0317 13:37:35.285996  660439 system_pods.go:89] "kube-scheduler-test-preload-939223" [c5547b92-a18f-4539-a2be-73ea0b44dceb] Running
	I0317 13:37:35.285999  660439 system_pods.go:89] "storage-provisioner" [0d517b0a-2408-4f9d-9c81-17efc006d020] Running
	I0317 13:37:35.286006  660439 system_pods.go:126] duration metric: took 201.375628ms to wait for k8s-apps to be running ...
	I0317 13:37:35.286012  660439 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 13:37:35.286072  660439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:37:35.302877  660439 system_svc.go:56] duration metric: took 16.853831ms WaitForService to wait for kubelet
	I0317 13:37:35.302901  660439 kubeadm.go:582] duration metric: took 13.117208275s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:37:35.302920  660439 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:37:35.485280  660439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:37:35.485305  660439 node_conditions.go:123] node cpu capacity is 2
	I0317 13:37:35.485316  660439 node_conditions.go:105] duration metric: took 182.391239ms to run NodePressure ...
	I0317 13:37:35.485328  660439 start.go:241] waiting for startup goroutines ...
	I0317 13:37:35.485334  660439 start.go:246] waiting for cluster config update ...
	I0317 13:37:35.485346  660439 start.go:255] writing updated cluster config ...
	I0317 13:37:35.485677  660439 ssh_runner.go:195] Run: rm -f paused
	I0317 13:37:35.533933  660439 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0317 13:37:35.535825  660439 out.go:201] 
	W0317 13:37:35.537298  660439 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0317 13:37:35.538679  660439 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0317 13:37:35.540149  660439 out.go:177] * Done! kubectl is now configured to use "test-preload-939223" cluster and "default" namespace by default
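The skew warning above compares the local kubectl minor version (1.32) with the control-plane version (1.24). A sketch of that comparison via the discovery client is shown below; the local minor version is hard-coded as an assumption (in practice it would be parsed from `kubectl version` output), and the +/-1 threshold is the usual skew policy, not minikube's exact logic.

	package main

	import (
		"fmt"
		"log"
		"strconv"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20539-621978/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Server (control-plane) version, e.g. Major "1", Minor "24".
		sv, err := cs.Discovery().ServerVersion()
		if err != nil {
			log.Fatal(err)
		}
		serverMinor, _ := strconv.Atoi(sv.Minor)

		clientMinor := 32 // assumed; parse from `kubectl version -o json` in practice
		if skew := clientMinor - serverMinor; skew > 1 || skew < -1 {
			fmt.Printf("! kubectl v1.%d and cluster v%s.%s differ by %d minor versions\n",
				clientMinor, sv.Major, sv.Minor, skew)
		}
	}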
	
	
	==> CRI-O <==
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.386946594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742218656386920834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ec953a0-c311-46db-93ad-915cc0ba0d13 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.387734721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78df58a9-efd8-485a-b08a-307b4c2caf6d name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.387798857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78df58a9-efd8-485a-b08a-307b4c2caf6d name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.387964506Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1df00966b8eff7f9a31378326dc2d463281151e25d292a5b59903b09298db1f6,PodSandboxId:86155a6f1411565bdead8c703f4823579f9b5af99cbfb2bda99da7823bd333cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1742218647930563885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wrvgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b626d8a-62f1-476a-a0ae-23607f6f7fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 2b3c843e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edb56650ce32cb62d8e916be1ec99355fe6abafa03ae763978e51241a8d7da9,PodSandboxId:5bb9799feb6d2784ee4f6a289bffdb141b49d55a5c4ae8cd6a7f786c9a501829,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742218640979012341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0d517b0a-2408-4f9d-9c81-17efc006d020,},Annotations:map[string]string{io.kubernetes.container.hash: da3d5bb3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25acc9e047eeb723c31b66ab8061af483051056f3a47d51ccc3ee44950a2ff5c,PodSandboxId:5bb9799feb6d2784ee4f6a289bffdb141b49d55a5c4ae8cd6a7f786c9a501829,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1742218640829582040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 0d517b0a-2408-4f9d-9c81-17efc006d020,},Annotations:map[string]string{io.kubernetes.container.hash: da3d5bb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3f5338b76dbcf9650526b56dfe28015b5ca50b4682917e1ce6fd12490047375,PodSandboxId:71533e3aa79ecd76eba2e8ae4392ce2653ce814f325512a0f7e98cda7910f17c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1742218640528323412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsk5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ffdc87-f7ac-4
cbc-9e3e-e57e14601a19,},Annotations:map[string]string{io.kubernetes.container.hash: 72bfd8d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98219747dd4ed816340bb46f29f4d221f00b48d2ef23f9b22001ce3b53ad60b7,PodSandboxId:6b88dee05f652dd7f0857351e330e70b8bdd93d43338e6ad72992ae4cd472336,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1742218635580290636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e16b76ab8e6346f5a4f09841158f618,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1006b59f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f54cd72812e7abca22f10bbb43e3bc94e35d041589c3ca826821585a18ee5f,PodSandboxId:8bdc03b2a07457fc88d5cbfdc0baeec962b6a120438a784cac9b645d6ca63af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1742218635562329006,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ab94eb485be3295ff2313e891d8dd,},A
nnotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071091b5f5ed0cad175c86117c1d13f76564fbd35c1d02c2441be2d2904dac2c,PodSandboxId:003e85b5123621b46e2637914b5f2439abcda3610aa6a6736582b6706d0f7346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1742218635533692234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8054391ef1cde847b461cc85a4eb36e,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886d0531e325e9351e5250796e3e59aa9b2b339338368be41f902303479ab81,PodSandboxId:0b678b067251d064c246fda86624a95f73a0e215e2e6d1fec6b22eada88fb16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1742218635507517014,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d80c5396b1ef739bff57a8f6c97727,},Annotations:map[string]
string{io.kubernetes.container.hash: fa3bcd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78df58a9-efd8-485a-b08a-307b4c2caf6d name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.422118928Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28549534-b92a-48d9-b225-9fbd62c2ff5a name=/runtime.v1.RuntimeService/Version
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.422199652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28549534-b92a-48d9-b225-9fbd62c2ff5a name=/runtime.v1.RuntimeService/Version
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.423203142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aee0a55e-339c-4aab-912e-8983084371da name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.423668851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742218656423605409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aee0a55e-339c-4aab-912e-8983084371da name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.424319653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17c4df9c-611d-4094-8843-c889b0193097 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.424381306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17c4df9c-611d-4094-8843-c889b0193097 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.424576249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1df00966b8eff7f9a31378326dc2d463281151e25d292a5b59903b09298db1f6,PodSandboxId:86155a6f1411565bdead8c703f4823579f9b5af99cbfb2bda99da7823bd333cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1742218647930563885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wrvgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b626d8a-62f1-476a-a0ae-23607f6f7fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 2b3c843e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edb56650ce32cb62d8e916be1ec99355fe6abafa03ae763978e51241a8d7da9,PodSandboxId:5bb9799feb6d2784ee4f6a289bffdb141b49d55a5c4ae8cd6a7f786c9a501829,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742218640979012341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0d517b0a-2408-4f9d-9c81-17efc006d020,},Annotations:map[string]string{io.kubernetes.container.hash: da3d5bb3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25acc9e047eeb723c31b66ab8061af483051056f3a47d51ccc3ee44950a2ff5c,PodSandboxId:5bb9799feb6d2784ee4f6a289bffdb141b49d55a5c4ae8cd6a7f786c9a501829,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1742218640829582040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 0d517b0a-2408-4f9d-9c81-17efc006d020,},Annotations:map[string]string{io.kubernetes.container.hash: da3d5bb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3f5338b76dbcf9650526b56dfe28015b5ca50b4682917e1ce6fd12490047375,PodSandboxId:71533e3aa79ecd76eba2e8ae4392ce2653ce814f325512a0f7e98cda7910f17c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1742218640528323412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsk5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ffdc87-f7ac-4
cbc-9e3e-e57e14601a19,},Annotations:map[string]string{io.kubernetes.container.hash: 72bfd8d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98219747dd4ed816340bb46f29f4d221f00b48d2ef23f9b22001ce3b53ad60b7,PodSandboxId:6b88dee05f652dd7f0857351e330e70b8bdd93d43338e6ad72992ae4cd472336,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1742218635580290636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e16b76ab8e6346f5a4f09841158f618,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1006b59f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f54cd72812e7abca22f10bbb43e3bc94e35d041589c3ca826821585a18ee5f,PodSandboxId:8bdc03b2a07457fc88d5cbfdc0baeec962b6a120438a784cac9b645d6ca63af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1742218635562329006,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ab94eb485be3295ff2313e891d8dd,},A
nnotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071091b5f5ed0cad175c86117c1d13f76564fbd35c1d02c2441be2d2904dac2c,PodSandboxId:003e85b5123621b46e2637914b5f2439abcda3610aa6a6736582b6706d0f7346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1742218635533692234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8054391ef1cde847b461cc85a4eb36e,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886d0531e325e9351e5250796e3e59aa9b2b339338368be41f902303479ab81,PodSandboxId:0b678b067251d064c246fda86624a95f73a0e215e2e6d1fec6b22eada88fb16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1742218635507517014,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d80c5396b1ef739bff57a8f6c97727,},Annotations:map[string]
string{io.kubernetes.container.hash: fa3bcd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17c4df9c-611d-4094-8843-c889b0193097 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.458314595Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a559aca7-872d-4651-b708-bde962a41770 name=/runtime.v1.RuntimeService/Version
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.458398050Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a559aca7-872d-4651-b708-bde962a41770 name=/runtime.v1.RuntimeService/Version
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.459185415Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e02fed9-9aea-4dd4-8104-02d4b729503a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.459621892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742218656459599657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e02fed9-9aea-4dd4-8104-02d4b729503a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.460115487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58776385-5ec8-41fc-a3aa-654c86773fe4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.460315120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58776385-5ec8-41fc-a3aa-654c86773fe4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.460554264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1df00966b8eff7f9a31378326dc2d463281151e25d292a5b59903b09298db1f6,PodSandboxId:86155a6f1411565bdead8c703f4823579f9b5af99cbfb2bda99da7823bd333cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1742218647930563885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wrvgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b626d8a-62f1-476a-a0ae-23607f6f7fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 2b3c843e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edb56650ce32cb62d8e916be1ec99355fe6abafa03ae763978e51241a8d7da9,PodSandboxId:5bb9799feb6d2784ee4f6a289bffdb141b49d55a5c4ae8cd6a7f786c9a501829,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742218640979012341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0d517b0a-2408-4f9d-9c81-17efc006d020,},Annotations:map[string]string{io.kubernetes.container.hash: da3d5bb3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25acc9e047eeb723c31b66ab8061af483051056f3a47d51ccc3ee44950a2ff5c,PodSandboxId:5bb9799feb6d2784ee4f6a289bffdb141b49d55a5c4ae8cd6a7f786c9a501829,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1742218640829582040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 0d517b0a-2408-4f9d-9c81-17efc006d020,},Annotations:map[string]string{io.kubernetes.container.hash: da3d5bb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3f5338b76dbcf9650526b56dfe28015b5ca50b4682917e1ce6fd12490047375,PodSandboxId:71533e3aa79ecd76eba2e8ae4392ce2653ce814f325512a0f7e98cda7910f17c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1742218640528323412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsk5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ffdc87-f7ac-4
cbc-9e3e-e57e14601a19,},Annotations:map[string]string{io.kubernetes.container.hash: 72bfd8d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98219747dd4ed816340bb46f29f4d221f00b48d2ef23f9b22001ce3b53ad60b7,PodSandboxId:6b88dee05f652dd7f0857351e330e70b8bdd93d43338e6ad72992ae4cd472336,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1742218635580290636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e16b76ab8e6346f5a4f09841158f618,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1006b59f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f54cd72812e7abca22f10bbb43e3bc94e35d041589c3ca826821585a18ee5f,PodSandboxId:8bdc03b2a07457fc88d5cbfdc0baeec962b6a120438a784cac9b645d6ca63af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1742218635562329006,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ab94eb485be3295ff2313e891d8dd,},A
nnotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071091b5f5ed0cad175c86117c1d13f76564fbd35c1d02c2441be2d2904dac2c,PodSandboxId:003e85b5123621b46e2637914b5f2439abcda3610aa6a6736582b6706d0f7346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1742218635533692234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8054391ef1cde847b461cc85a4eb36e,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886d0531e325e9351e5250796e3e59aa9b2b339338368be41f902303479ab81,PodSandboxId:0b678b067251d064c246fda86624a95f73a0e215e2e6d1fec6b22eada88fb16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1742218635507517014,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d80c5396b1ef739bff57a8f6c97727,},Annotations:map[string]
string{io.kubernetes.container.hash: fa3bcd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58776385-5ec8-41fc-a3aa-654c86773fe4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.489886697Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa452fb1-2c20-4949-a5ee-c0d4cc0eb71d name=/runtime.v1.RuntimeService/Version
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.489955832Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa452fb1-2c20-4949-a5ee-c0d4cc0eb71d name=/runtime.v1.RuntimeService/Version
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.490856781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5264d237-1ec6-46bf-8878-095831248dd9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.491417692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742218656491387964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5264d237-1ec6-46bf-8878-095831248dd9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.492071710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9025bff3-5424-4e79-b8ae-89118277cead name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.492121399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9025bff3-5424-4e79-b8ae-89118277cead name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:37:36 test-preload-939223 crio[668]: time="2025-03-17 13:37:36.492300281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1df00966b8eff7f9a31378326dc2d463281151e25d292a5b59903b09298db1f6,PodSandboxId:86155a6f1411565bdead8c703f4823579f9b5af99cbfb2bda99da7823bd333cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1742218647930563885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wrvgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b626d8a-62f1-476a-a0ae-23607f6f7fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 2b3c843e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edb56650ce32cb62d8e916be1ec99355fe6abafa03ae763978e51241a8d7da9,PodSandboxId:5bb9799feb6d2784ee4f6a289bffdb141b49d55a5c4ae8cd6a7f786c9a501829,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742218640979012341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0d517b0a-2408-4f9d-9c81-17efc006d020,},Annotations:map[string]string{io.kubernetes.container.hash: da3d5bb3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25acc9e047eeb723c31b66ab8061af483051056f3a47d51ccc3ee44950a2ff5c,PodSandboxId:5bb9799feb6d2784ee4f6a289bffdb141b49d55a5c4ae8cd6a7f786c9a501829,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1742218640829582040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 0d517b0a-2408-4f9d-9c81-17efc006d020,},Annotations:map[string]string{io.kubernetes.container.hash: da3d5bb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3f5338b76dbcf9650526b56dfe28015b5ca50b4682917e1ce6fd12490047375,PodSandboxId:71533e3aa79ecd76eba2e8ae4392ce2653ce814f325512a0f7e98cda7910f17c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1742218640528323412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qsk5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ffdc87-f7ac-4
cbc-9e3e-e57e14601a19,},Annotations:map[string]string{io.kubernetes.container.hash: 72bfd8d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98219747dd4ed816340bb46f29f4d221f00b48d2ef23f9b22001ce3b53ad60b7,PodSandboxId:6b88dee05f652dd7f0857351e330e70b8bdd93d43338e6ad72992ae4cd472336,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1742218635580290636,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e16b76ab8e6346f5a4f09841158f618,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1006b59f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f54cd72812e7abca22f10bbb43e3bc94e35d041589c3ca826821585a18ee5f,PodSandboxId:8bdc03b2a07457fc88d5cbfdc0baeec962b6a120438a784cac9b645d6ca63af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1742218635562329006,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ab94eb485be3295ff2313e891d8dd,},A
nnotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071091b5f5ed0cad175c86117c1d13f76564fbd35c1d02c2441be2d2904dac2c,PodSandboxId:003e85b5123621b46e2637914b5f2439abcda3610aa6a6736582b6706d0f7346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1742218635533692234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8054391ef1cde847b461cc85a4eb36e,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886d0531e325e9351e5250796e3e59aa9b2b339338368be41f902303479ab81,PodSandboxId:0b678b067251d064c246fda86624a95f73a0e215e2e6d1fec6b22eada88fb16f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1742218635507517014,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-939223,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d80c5396b1ef739bff57a8f6c97727,},Annotations:map[string]
string{io.kubernetes.container.hash: fa3bcd7f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9025bff3-5424-4e79-b8ae-89118277cead name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1df00966b8eff       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   86155a6f14115       coredns-6d4b75cb6d-wrvgp
	6edb56650ce32       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       4                   5bb9799feb6d2       storage-provisioner
	25acc9e047eeb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       3                   5bb9799feb6d2       storage-provisioner
	e3f5338b76dbc       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   71533e3aa79ec       kube-proxy-qsk5k
	98219747dd4ed       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   6b88dee05f652       etcd-test-preload-939223
	40f54cd72812e       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   8bdc03b2a0745       kube-controller-manager-test-preload-939223
	071091b5f5ed0       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   003e85b512362       kube-scheduler-test-preload-939223
	4886d0531e325       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   0b678b067251d       kube-apiserver-test-preload-939223
	
	
	==> coredns [1df00966b8eff7f9a31378326dc2d463281151e25d292a5b59903b09298db1f6] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:41395 - 39880 "HINFO IN 5456280843806500825.1532744558379390532. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022412498s
	
	
	==> describe nodes <==
	Name:               test-preload-939223
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-939223
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c
	                    minikube.k8s.io/name=test-preload-939223
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T13_35_55_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 13:35:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-939223
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 13:37:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 13:37:29 +0000   Mon, 17 Mar 2025 13:35:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 13:37:29 +0000   Mon, 17 Mar 2025 13:35:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 13:37:29 +0000   Mon, 17 Mar 2025 13:35:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 13:37:29 +0000   Mon, 17 Mar 2025 13:37:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    test-preload-939223
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 15fe2ac8857e4930bd100985a7b4dec3
	  System UUID:                15fe2ac8-857e-4930-bd10-0985a7b4dec3
	  Boot ID:                    7df00680-d8b7-44ac-98bf-09e7419f4006
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-wrvgp                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     88s
	  kube-system                 etcd-test-preload-939223                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         101s
	  kube-system                 kube-apiserver-test-preload-939223             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-test-preload-939223    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-qsk5k                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-test-preload-939223             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 85s                  kube-proxy       
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s (x5 over 108s)  kubelet          Node test-preload-939223 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     108s (x4 over 108s)  kubelet          Node test-preload-939223 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    108s (x4 over 108s)  kubelet          Node test-preload-939223 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s                 kubelet          Node test-preload-939223 status is now: NodeHasSufficientPID
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  101s                 kubelet          Node test-preload-939223 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s                 kubelet          Node test-preload-939223 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                91s                  kubelet          Node test-preload-939223 status is now: NodeReady
	  Normal  RegisteredNode           88s                  node-controller  Node test-preload-939223 event: Registered Node test-preload-939223 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x8 over 22s)    kubelet          Node test-preload-939223 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 22s)    kubelet          Node test-preload-939223 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)    kubelet          Node test-preload-939223 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-939223 event: Registered Node test-preload-939223 in Controller
	
	
	==> dmesg <==
	[Mar17 13:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047519] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035591] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.793117] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.809509] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.524765] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.414474] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.061364] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.047362] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.147939] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.122674] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.241897] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[Mar17 13:37] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.060935] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.780180] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +4.634257] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.959416] systemd-fstab-generator[1819]: Ignoring "noauto" option for root device
	[  +5.514874] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [98219747dd4ed816340bb46f29f4d221f00b48d2ef23f9b22001ce3b53ad60b7] <==
	{"level":"info","ts":"2025-03-17T13:37:15.853Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"6c80de388e5020e8","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-03-17T13:37:15.854Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-03-17T13:37:15.857Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-03-17T13:37:15.857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(7818493287602331880)"}
	{"level":"info","ts":"2025-03-17T13:37:15.858Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8","added-peer-id":"6c80de388e5020e8","added-peer-peer-urls":["https://192.168.39.2:2380"]}
	{"level":"info","ts":"2025-03-17T13:37:15.858Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6c80de388e5020e8","initial-advertise-peer-urls":["https://192.168.39.2:2380"],"listen-peer-urls":["https://192.168.39.2:2380"],"advertise-client-urls":["https://192.168.39.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-03-17T13:37:15.858Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-03-17T13:37:15.858Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:37:15.859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:37:15.859Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.2:2380"}
	{"level":"info","ts":"2025-03-17T13:37:15.862Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.2:2380"}
	{"level":"info","ts":"2025-03-17T13:37:17.117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 is starting a new election at term 2"}
	{"level":"info","ts":"2025-03-17T13:37:17.117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-03-17T13:37:17.117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 received MsgPreVoteResp from 6c80de388e5020e8 at term 2"}
	{"level":"info","ts":"2025-03-17T13:37:17.117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became candidate at term 3"}
	{"level":"info","ts":"2025-03-17T13:37:17.117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 received MsgVoteResp from 6c80de388e5020e8 at term 3"}
	{"level":"info","ts":"2025-03-17T13:37:17.117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became leader at term 3"}
	{"level":"info","ts":"2025-03-17T13:37:17.117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6c80de388e5020e8 elected leader 6c80de388e5020e8 at term 3"}
	{"level":"info","ts":"2025-03-17T13:37:17.117Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"6c80de388e5020e8","local-member-attributes":"{Name:test-preload-939223 ClientURLs:[https://192.168.39.2:2379]}","request-path":"/0/members/6c80de388e5020e8/attributes","cluster-id":"e20ba2e00cb0e827","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-17T13:37:17.117Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:37:17.119Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.2:2379"}
	{"level":"info","ts":"2025-03-17T13:37:17.119Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:37:17.120Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T13:37:17.120Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T13:37:17.120Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:37:36 up 0 min,  0 users,  load average: 0.50, 0.15, 0.05
	Linux test-preload-939223 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4886d0531e325e9351e5250796e3e59aa9b2b339338368be41f902303479ab81] <==
	I0317 13:37:19.388140       1 naming_controller.go:291] Starting NamingConditionController
	I0317 13:37:19.388415       1 establishing_controller.go:76] Starting EstablishingController
	I0317 13:37:19.388425       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0317 13:37:19.388431       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0317 13:37:19.388442       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0317 13:37:19.390128       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0317 13:37:19.402402       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0317 13:37:19.490687       1 cache.go:39] Caches are synced for autoregister controller
	I0317 13:37:19.503250       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0317 13:37:19.525921       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0317 13:37:19.544096       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0317 13:37:19.544593       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0317 13:37:19.545009       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0317 13:37:19.545356       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0317 13:37:19.558532       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0317 13:37:20.016686       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0317 13:37:20.345591       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0317 13:37:20.766438       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0317 13:37:21.110355       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0317 13:37:21.120840       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0317 13:37:21.149678       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0317 13:37:21.163170       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 13:37:21.168054       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 13:37:32.152254       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 13:37:32.242166       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [40f54cd72812e7abca22f10bbb43e3bc94e35d041589c3ca826821585a18ee5f] <==
	I0317 13:37:32.041853       1 shared_informer.go:262] Caches are synced for job
	I0317 13:37:32.050894       1 shared_informer.go:262] Caches are synced for disruption
	I0317 13:37:32.050947       1 disruption.go:371] Sending events to api server.
	I0317 13:37:32.055780       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0317 13:37:32.057309       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0317 13:37:32.057363       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0317 13:37:32.057582       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0317 13:37:32.060864       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0317 13:37:32.062896       1 shared_informer.go:262] Caches are synced for persistent volume
	I0317 13:37:32.087648       1 shared_informer.go:262] Caches are synced for attach detach
	I0317 13:37:32.107166       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0317 13:37:32.140311       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0317 13:37:32.164169       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0317 13:37:32.178246       1 shared_informer.go:262] Caches are synced for endpoint
	I0317 13:37:32.245228       1 shared_informer.go:262] Caches are synced for resource quota
	I0317 13:37:32.251377       1 shared_informer.go:262] Caches are synced for resource quota
	I0317 13:37:32.275715       1 shared_informer.go:262] Caches are synced for taint
	I0317 13:37:32.275881       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0317 13:37:32.275967       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-939223. Assuming now as a timestamp.
	I0317 13:37:32.276017       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0317 13:37:32.276279       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0317 13:37:32.276580       1 event.go:294] "Event occurred" object="test-preload-939223" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-939223 event: Registered Node test-preload-939223 in Controller"
	I0317 13:37:32.677694       1 shared_informer.go:262] Caches are synced for garbage collector
	I0317 13:37:32.733588       1 shared_informer.go:262] Caches are synced for garbage collector
	I0317 13:37:32.733700       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [e3f5338b76dbcf9650526b56dfe28015b5ca50b4682917e1ce6fd12490047375] <==
	I0317 13:37:20.720386       1 node.go:163] Successfully retrieved node IP: 192.168.39.2
	I0317 13:37:20.720596       1 server_others.go:138] "Detected node IP" address="192.168.39.2"
	I0317 13:37:20.720729       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0317 13:37:20.752396       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0317 13:37:20.752413       1 server_others.go:206] "Using iptables Proxier"
	I0317 13:37:20.753031       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0317 13:37:20.753567       1 server.go:661] "Version info" version="v1.24.4"
	I0317 13:37:20.753583       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:37:20.755068       1 config.go:317] "Starting service config controller"
	I0317 13:37:20.755276       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0317 13:37:20.755314       1 config.go:226] "Starting endpoint slice config controller"
	I0317 13:37:20.755319       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0317 13:37:20.756114       1 config.go:444] "Starting node config controller"
	I0317 13:37:20.756122       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0317 13:37:20.856725       1 shared_informer.go:262] Caches are synced for node config
	I0317 13:37:20.856796       1 shared_informer.go:262] Caches are synced for service config
	I0317 13:37:20.856830       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [071091b5f5ed0cad175c86117c1d13f76564fbd35c1d02c2441be2d2904dac2c] <==
	I0317 13:37:16.526016       1 serving.go:348] Generated self-signed cert in-memory
	W0317 13:37:19.411698       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0317 13:37:19.411787       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0317 13:37:19.411819       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0317 13:37:19.411838       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0317 13:37:19.447128       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0317 13:37:19.447158       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:37:19.456502       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0317 13:37:19.456780       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0317 13:37:19.457167       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0317 13:37:19.457266       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0317 13:37:19.557315       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.834016    1124 topology_manager.go:200] "Topology Admit Handler"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.834119    1124 topology_manager.go:200] "Topology Admit Handler"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.834194    1124 topology_manager.go:200] "Topology Admit Handler"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: E0317 13:37:19.836104    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wrvgp" podUID=4b626d8a-62f1-476a-a0ae-23607f6f7fd2
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.889887    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68ffdc87-f7ac-4cbc-9e3e-e57e14601a19-xtables-lock\") pod \"kube-proxy-qsk5k\" (UID: \"68ffdc87-f7ac-4cbc-9e3e-e57e14601a19\") " pod="kube-system/kube-proxy-qsk5k"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.889932    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68ffdc87-f7ac-4cbc-9e3e-e57e14601a19-lib-modules\") pod \"kube-proxy-qsk5k\" (UID: \"68ffdc87-f7ac-4cbc-9e3e-e57e14601a19\") " pod="kube-system/kube-proxy-qsk5k"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.889956    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b626d8a-62f1-476a-a0ae-23607f6f7fd2-config-volume\") pod \"coredns-6d4b75cb6d-wrvgp\" (UID: \"4b626d8a-62f1-476a-a0ae-23607f6f7fd2\") " pod="kube-system/coredns-6d4b75cb6d-wrvgp"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.890064    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6twm\" (UniqueName: \"kubernetes.io/projected/4b626d8a-62f1-476a-a0ae-23607f6f7fd2-kube-api-access-b6twm\") pod \"coredns-6d4b75cb6d-wrvgp\" (UID: \"4b626d8a-62f1-476a-a0ae-23607f6f7fd2\") " pod="kube-system/coredns-6d4b75cb6d-wrvgp"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.890168    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8pn\" (UniqueName: \"kubernetes.io/projected/68ffdc87-f7ac-4cbc-9e3e-e57e14601a19-kube-api-access-lr8pn\") pod \"kube-proxy-qsk5k\" (UID: \"68ffdc87-f7ac-4cbc-9e3e-e57e14601a19\") " pod="kube-system/kube-proxy-qsk5k"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.890236    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wvn6\" (UniqueName: \"kubernetes.io/projected/0d517b0a-2408-4f9d-9c81-17efc006d020-kube-api-access-2wvn6\") pod \"storage-provisioner\" (UID: \"0d517b0a-2408-4f9d-9c81-17efc006d020\") " pod="kube-system/storage-provisioner"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.890271    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68ffdc87-f7ac-4cbc-9e3e-e57e14601a19-kube-proxy\") pod \"kube-proxy-qsk5k\" (UID: \"68ffdc87-f7ac-4cbc-9e3e-e57e14601a19\") " pod="kube-system/kube-proxy-qsk5k"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.890327    1124 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0d517b0a-2408-4f9d-9c81-17efc006d020-tmp\") pod \"storage-provisioner\" (UID: \"0d517b0a-2408-4f9d-9c81-17efc006d020\") " pod="kube-system/storage-provisioner"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: I0317 13:37:19.890346    1124 reconciler.go:159] "Reconciler: start to sync state"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: E0317 13:37:19.891064    1124 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: E0317 13:37:19.993942    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 17 13:37:19 test-preload-939223 kubelet[1124]: E0317 13:37:19.994205    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4b626d8a-62f1-476a-a0ae-23607f6f7fd2-config-volume podName:4b626d8a-62f1-476a-a0ae-23607f6f7fd2 nodeName:}" failed. No retries permitted until 2025-03-17 13:37:20.494102028 +0000 UTC m=+5.768160457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4b626d8a-62f1-476a-a0ae-23607f6f7fd2-config-volume") pod "coredns-6d4b75cb6d-wrvgp" (UID: "4b626d8a-62f1-476a-a0ae-23607f6f7fd2") : object "kube-system"/"coredns" not registered
	Mar 17 13:37:20 test-preload-939223 kubelet[1124]: E0317 13:37:20.497842    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 17 13:37:20 test-preload-939223 kubelet[1124]: E0317 13:37:20.497912    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4b626d8a-62f1-476a-a0ae-23607f6f7fd2-config-volume podName:4b626d8a-62f1-476a-a0ae-23607f6f7fd2 nodeName:}" failed. No retries permitted until 2025-03-17 13:37:21.49789881 +0000 UTC m=+6.771957233 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4b626d8a-62f1-476a-a0ae-23607f6f7fd2-config-volume") pod "coredns-6d4b75cb6d-wrvgp" (UID: "4b626d8a-62f1-476a-a0ae-23607f6f7fd2") : object "kube-system"/"coredns" not registered
	Mar 17 13:37:20 test-preload-939223 kubelet[1124]: I0317 13:37:20.956930    1124 scope.go:110] "RemoveContainer" containerID="25acc9e047eeb723c31b66ab8061af483051056f3a47d51ccc3ee44950a2ff5c"
	Mar 17 13:37:21 test-preload-939223 kubelet[1124]: E0317 13:37:21.505357    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 17 13:37:21 test-preload-939223 kubelet[1124]: E0317 13:37:21.505481    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4b626d8a-62f1-476a-a0ae-23607f6f7fd2-config-volume podName:4b626d8a-62f1-476a-a0ae-23607f6f7fd2 nodeName:}" failed. No retries permitted until 2025-03-17 13:37:23.50546517 +0000 UTC m=+8.779523580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4b626d8a-62f1-476a-a0ae-23607f6f7fd2-config-volume") pod "coredns-6d4b75cb6d-wrvgp" (UID: "4b626d8a-62f1-476a-a0ae-23607f6f7fd2") : object "kube-system"/"coredns" not registered
	Mar 17 13:37:21 test-preload-939223 kubelet[1124]: E0317 13:37:21.923521    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wrvgp" podUID=4b626d8a-62f1-476a-a0ae-23607f6f7fd2
	Mar 17 13:37:23 test-preload-939223 kubelet[1124]: E0317 13:37:23.518561    1124 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 17 13:37:23 test-preload-939223 kubelet[1124]: E0317 13:37:23.519048    1124 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4b626d8a-62f1-476a-a0ae-23607f6f7fd2-config-volume podName:4b626d8a-62f1-476a-a0ae-23607f6f7fd2 nodeName:}" failed. No retries permitted until 2025-03-17 13:37:27.519020283 +0000 UTC m=+12.793078693 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4b626d8a-62f1-476a-a0ae-23607f6f7fd2-config-volume") pod "coredns-6d4b75cb6d-wrvgp" (UID: "4b626d8a-62f1-476a-a0ae-23607f6f7fd2") : object "kube-system"/"coredns" not registered
	Mar 17 13:37:23 test-preload-939223 kubelet[1124]: E0317 13:37:23.924251    1124 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wrvgp" podUID=4b626d8a-62f1-476a-a0ae-23607f6f7fd2
	
	
	==> storage-provisioner [25acc9e047eeb723c31b66ab8061af483051056f3a47d51ccc3ee44950a2ff5c] <==
	I0317 13:37:20.904744       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0317 13:37:20.907725       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [6edb56650ce32cb62d8e916be1ec99355fe6abafa03ae763978e51241a8d7da9] <==
	I0317 13:37:21.094548       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0317 13:37:21.115982       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0317 13:37:21.116079       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-939223 -n test-preload-939223
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-939223 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-939223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-939223
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-939223: (1.149237581s)
--- FAIL: TestPreload (175.21s)

                                                
                                    
TestKubernetesUpgrade (435.43s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-312638 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-312638 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m55.679246821s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-312638] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-312638" primary control-plane node in "kubernetes-upgrade-312638" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:39:32.670086  661927 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:39:32.672293  661927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:39:32.672314  661927 out.go:358] Setting ErrFile to fd 2...
	I0317 13:39:32.672320  661927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:39:32.672642  661927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:39:32.673934  661927 out.go:352] Setting JSON to false
	I0317 13:39:32.675197  661927 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12117,"bootTime":1742206656,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:39:32.675314  661927 start.go:139] virtualization: kvm guest
	I0317 13:39:32.676922  661927 out.go:177] * [kubernetes-upgrade-312638] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:39:32.678705  661927 notify.go:220] Checking for updates...
	I0317 13:39:32.678743  661927 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:39:32.680859  661927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:39:32.682180  661927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:39:32.684825  661927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:39:32.686328  661927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:39:32.687804  661927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:39:32.689171  661927 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:39:32.741554  661927 out.go:177] * Using the kvm2 driver based on user configuration
	I0317 13:39:32.742839  661927 start.go:297] selected driver: kvm2
	I0317 13:39:32.742856  661927 start.go:901] validating driver "kvm2" against <nil>
	I0317 13:39:32.742873  661927 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:39:32.743945  661927 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:39:32.752777  661927 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 13:39:32.770446  661927 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 13:39:32.770519  661927 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 13:39:32.770821  661927 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 13:39:32.770865  661927 cni.go:84] Creating CNI manager for ""
	I0317 13:39:32.770936  661927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:39:32.770950  661927 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 13:39:32.771016  661927 start.go:340] cluster config:
	{Name:kubernetes-upgrade-312638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:39:32.771149  661927 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:39:32.772764  661927 out.go:177] * Starting "kubernetes-upgrade-312638" primary control-plane node in "kubernetes-upgrade-312638" cluster
	I0317 13:39:32.773960  661927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0317 13:39:32.774003  661927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0317 13:39:32.774014  661927 cache.go:56] Caching tarball of preloaded images
	I0317 13:39:32.774096  661927 preload.go:172] Found /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0317 13:39:32.774109  661927 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0317 13:39:32.774502  661927 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/config.json ...
	I0317 13:39:32.774531  661927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/config.json: {Name:mk8d1541eb7d815f81f698dc46dca0b50180abcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:39:32.774688  661927 start.go:360] acquireMachinesLock for kubernetes-upgrade-312638: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:39:58.143723  661927 start.go:364] duration metric: took 25.368993313s to acquireMachinesLock for "kubernetes-upgrade-312638"
	I0317 13:39:58.143793  661927 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-312638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20
.0 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:39:58.143910  661927 start.go:125] createHost starting for "" (driver="kvm2")
	I0317 13:39:58.145963  661927 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0317 13:39:58.146121  661927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:39:58.146153  661927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:39:58.163584  661927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0317 13:39:58.164113  661927 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:39:58.164739  661927 main.go:141] libmachine: Using API Version  1
	I0317 13:39:58.164761  661927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:39:58.165144  661927 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:39:58.165347  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetMachineName
	I0317 13:39:58.165528  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:39:58.165675  661927 start.go:159] libmachine.API.Create for "kubernetes-upgrade-312638" (driver="kvm2")
	I0317 13:39:58.165705  661927 client.go:168] LocalClient.Create starting
	I0317 13:39:58.165742  661927 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem
	I0317 13:39:58.165779  661927 main.go:141] libmachine: Decoding PEM data...
	I0317 13:39:58.165799  661927 main.go:141] libmachine: Parsing certificate...
	I0317 13:39:58.165865  661927 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem
	I0317 13:39:58.165893  661927 main.go:141] libmachine: Decoding PEM data...
	I0317 13:39:58.165909  661927 main.go:141] libmachine: Parsing certificate...
	I0317 13:39:58.165934  661927 main.go:141] libmachine: Running pre-create checks...
	I0317 13:39:58.165947  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .PreCreateCheck
	I0317 13:39:58.166294  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetConfigRaw
	I0317 13:39:58.166708  661927 main.go:141] libmachine: Creating machine...
	I0317 13:39:58.166721  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .Create
	I0317 13:39:58.166884  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) creating KVM machine...
	I0317 13:39:58.166909  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) creating network...
	I0317 13:39:58.168035  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found existing default KVM network
	I0317 13:39:58.168775  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:39:58.168613  662257 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:51:0b:4e} reservation:<nil>}
	I0317 13:39:58.169526  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:39:58.169461  662257 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002093e0}
	I0317 13:39:58.169595  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | created network xml: 
	I0317 13:39:58.169616  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | <network>
	I0317 13:39:58.169631  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG |   <name>mk-kubernetes-upgrade-312638</name>
	I0317 13:39:58.169643  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG |   <dns enable='no'/>
	I0317 13:39:58.169652  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG |   
	I0317 13:39:58.169661  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0317 13:39:58.169669  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG |     <dhcp>
	I0317 13:39:58.169679  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0317 13:39:58.169687  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG |     </dhcp>
	I0317 13:39:58.169693  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG |   </ip>
	I0317 13:39:58.169707  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG |   
	I0317 13:39:58.169717  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | </network>
	I0317 13:39:58.169727  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | 
	I0317 13:39:58.175134  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | trying to create private KVM network mk-kubernetes-upgrade-312638 192.168.50.0/24...
	I0317 13:39:58.244621  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | private KVM network mk-kubernetes-upgrade-312638 192.168.50.0/24 created
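
The network XML the driver logs above maps directly onto libvirt's network schema (name, DNS disabled, a /24 gateway, and a DHCP range). As a rough illustration only, not minikube's actual code, the same document can be rendered with Go's text/template; the `netParams` struct and its field names are invented for this sketch:

```go
package main

import (
	"os"
	"text/template"
)

// Template mirroring the <network> document shown in the log above.
const networkXML = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
    </dhcp>
  </ip>
</network>
`

// netParams is illustrative; it is not a minikube type.
type netParams struct {
	Name, Gateway, Netmask, DHCPStart, DHCPEnd string
}

func main() {
	tmpl := template.Must(template.New("net").Parse(networkXML))
	_ = tmpl.Execute(os.Stdout, netParams{
		Name:      "mk-kubernetes-upgrade-312638",
		Gateway:   "192.168.50.1",
		Netmask:   "255.255.255.0",
		DHCPStart: "192.168.50.2",
		DHCPEnd:   "192.168.50.253",
	})
}
```
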
	I0317 13:39:58.244657  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) setting up store path in /home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638 ...
	I0317 13:39:58.244671  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:39:58.244612  662257 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:39:58.244685  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) building disk image from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0317 13:39:58.244794  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Downloading /home/jenkins/minikube-integration/20539-621978/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 13:39:58.519562  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:39:58.519417  662257 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa...
	I0317 13:39:58.751926  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:39:58.751749  662257 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/kubernetes-upgrade-312638.rawdisk...
	I0317 13:39:58.751988  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | Writing magic tar header
	I0317 13:39:58.752010  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638 (perms=drwx------)
	I0317 13:39:58.752021  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | Writing SSH key tar header
	I0317 13:39:58.752035  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:39:58.751877  662257 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638 ...
	I0317 13:39:58.752046  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638
	I0317 13:39:58.752054  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines
	I0317 13:39:58.752061  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:39:58.752078  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978
	I0317 13:39:58.752094  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0317 13:39:58.752107  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines (perms=drwxr-xr-x)
	I0317 13:39:58.752119  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube (perms=drwxr-xr-x)
	I0317 13:39:58.752130  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) setting executable bit set on /home/jenkins/minikube-integration/20539-621978 (perms=drwxrwxr-x)
	I0317 13:39:58.752137  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0317 13:39:58.752145  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0317 13:39:58.752157  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) creating domain...
	I0317 13:39:58.752186  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | checking permissions on dir: /home/jenkins
	I0317 13:39:58.752204  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | checking permissions on dir: /home
	I0317 13:39:58.752212  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | skipping /home - not owner
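
Before the domain is defined, the log shows an `id_rsa` key being created under the machine's store path. A self-contained sketch of generating such a key pair with the standard library and golang.org/x/crypto/ssh follows; the output directory, key size, and file modes are assumptions, not what minikube's common.go necessarily uses:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"
	"path/filepath"

	"golang.org/x/crypto/ssh"
)

// writeSSHKeyPair writes a PEM-encoded RSA private key (0600) and an
// authorized_keys-style public key next to it, similar in spirit to the
// id_rsa the log mentions.
func writeSSHKeyPair(dir string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(filepath.Join(dir, "id_rsa"), privPEM, 0o600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "id_rsa.pub"), ssh.MarshalAuthorizedKey(pub), 0o644)
}

func main() {
	if err := writeSSHKeyPair(os.TempDir()); err != nil {
		panic(err)
	}
}
```
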
	I0317 13:39:58.753379  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) define libvirt domain using xml: 
	I0317 13:39:58.753420  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) <domain type='kvm'>
	I0317 13:39:58.753436  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   <name>kubernetes-upgrade-312638</name>
	I0317 13:39:58.753449  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   <memory unit='MiB'>2200</memory>
	I0317 13:39:58.753460  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   <vcpu>2</vcpu>
	I0317 13:39:58.753473  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   <features>
	I0317 13:39:58.753484  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <acpi/>
	I0317 13:39:58.753496  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <apic/>
	I0317 13:39:58.753505  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <pae/>
	I0317 13:39:58.753514  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     
	I0317 13:39:58.753525  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   </features>
	I0317 13:39:58.753536  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   <cpu mode='host-passthrough'>
	I0317 13:39:58.753546  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   
	I0317 13:39:58.753557  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   </cpu>
	I0317 13:39:58.753569  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   <os>
	I0317 13:39:58.753583  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <type>hvm</type>
	I0317 13:39:58.753594  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <boot dev='cdrom'/>
	I0317 13:39:58.753604  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <boot dev='hd'/>
	I0317 13:39:58.753616  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <bootmenu enable='no'/>
	I0317 13:39:58.753625  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   </os>
	I0317 13:39:58.753636  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   <devices>
	I0317 13:39:58.753648  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <disk type='file' device='cdrom'>
	I0317 13:39:58.753662  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/boot2docker.iso'/>
	I0317 13:39:58.753678  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <target dev='hdc' bus='scsi'/>
	I0317 13:39:58.753690  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <readonly/>
	I0317 13:39:58.753701  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     </disk>
	I0317 13:39:58.753712  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <disk type='file' device='disk'>
	I0317 13:39:58.753727  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0317 13:39:58.753743  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/kubernetes-upgrade-312638.rawdisk'/>
	I0317 13:39:58.753755  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <target dev='hda' bus='virtio'/>
	I0317 13:39:58.753767  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     </disk>
	I0317 13:39:58.753779  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <interface type='network'>
	I0317 13:39:58.753790  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <source network='mk-kubernetes-upgrade-312638'/>
	I0317 13:39:58.753810  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <model type='virtio'/>
	I0317 13:39:58.753821  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     </interface>
	I0317 13:39:58.753829  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <interface type='network'>
	I0317 13:39:58.753844  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <source network='default'/>
	I0317 13:39:58.753857  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <model type='virtio'/>
	I0317 13:39:58.753867  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     </interface>
	I0317 13:39:58.753877  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <serial type='pty'>
	I0317 13:39:58.753887  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <target port='0'/>
	I0317 13:39:58.753897  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     </serial>
	I0317 13:39:58.753906  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <console type='pty'>
	I0317 13:39:58.753916  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <target type='serial' port='0'/>
	I0317 13:39:58.753930  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     </console>
	I0317 13:39:58.753952  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     <rng model='virtio'>
	I0317 13:39:58.753968  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)       <backend model='random'>/dev/random</backend>
	I0317 13:39:58.753977  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     </rng>
	I0317 13:39:58.753981  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     
	I0317 13:39:58.753988  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)     
	I0317 13:39:58.753992  661927 main.go:141] libmachine: (kubernetes-upgrade-312638)   </devices>
	I0317 13:39:58.753998  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) </domain>
	I0317 13:39:58.754003  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) 
	I0317 13:39:58.760685  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:76:5f:7a in network default
	I0317 13:39:58.761318  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) starting domain...
	I0317 13:39:58.761348  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:39:58.761357  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) ensuring networks are active...
	I0317 13:39:58.762306  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Ensuring network default is active
	I0317 13:39:58.762595  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Ensuring network mk-kubernetes-upgrade-312638 is active
	I0317 13:39:58.763090  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) getting domain XML...
	I0317 13:39:58.763905  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) creating domain...
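
The define-and-start sequence above goes through the libvirt API inside the kvm2 driver plugin. As a stand-in for illustration only (not what the driver actually does), the same flow can be sketched by shelling out to virsh from Go; the placeholder XML here would of course be rejected by virsh, and the error would simply be printed:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStart writes the domain XML to a temp file and invokes virsh,
// roughly mirroring the define-then-start steps the log records.
func defineAndStart(domainXML string) error {
	f, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		return err
	}
	f.Close()

	for _, args := range [][]string{
		{"virsh", "-c", "qemu:///system", "define", f.Name()},
		{"virsh", "-c", "qemu:///system", "start", "kubernetes-upgrade-312638"},
	} {
		cmd := exec.Command(args[0], args[1:]...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	// domainXML would be the <domain> document shown in the log above.
	fmt.Println(defineAndStart("<domain type='kvm'>...</domain>"))
}
```
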
	I0317 13:40:00.107648  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) waiting for IP...
	I0317 13:40:00.108523  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:00.109004  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:00.109069  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:00.109016  662257 retry.go:31] will retry after 211.765636ms: waiting for domain to come up
	I0317 13:40:00.322723  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:00.323258  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:00.323289  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:00.323220  662257 retry.go:31] will retry after 284.428379ms: waiting for domain to come up
	I0317 13:40:00.610033  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:00.610602  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:00.610637  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:00.610536  662257 retry.go:31] will retry after 399.121597ms: waiting for domain to come up
	I0317 13:40:01.011061  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:01.011578  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:01.011609  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:01.011525  662257 retry.go:31] will retry after 497.97411ms: waiting for domain to come up
	I0317 13:40:01.511323  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:01.511777  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:01.511805  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:01.511745  662257 retry.go:31] will retry after 622.130267ms: waiting for domain to come up
	I0317 13:40:02.135144  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:02.135603  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:02.135689  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:02.135591  662257 retry.go:31] will retry after 612.927848ms: waiting for domain to come up
	I0317 13:40:02.751929  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:02.752096  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:02.752123  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:02.751989  662257 retry.go:31] will retry after 924.681893ms: waiting for domain to come up
	I0317 13:40:03.678313  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:03.678839  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:03.678879  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:03.678814  662257 retry.go:31] will retry after 1.057569916s: waiting for domain to come up
	I0317 13:40:04.738142  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:04.738602  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:04.738630  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:04.738555  662257 retry.go:31] will retry after 1.706297604s: waiting for domain to come up
	I0317 13:40:06.447347  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:06.447801  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:06.447829  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:06.447775  662257 retry.go:31] will retry after 1.805622338s: waiting for domain to come up
	I0317 13:40:08.255199  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:08.255784  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:08.255814  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:08.255751  662257 retry.go:31] will retry after 2.141330488s: waiting for domain to come up
	I0317 13:40:10.400240  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:10.400648  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:10.400689  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:10.400586  662257 retry.go:31] will retry after 2.575803624s: waiting for domain to come up
	I0317 13:40:12.979275  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:12.979840  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:12.979865  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:12.979793  662257 retry.go:31] will retry after 2.836941652s: waiting for domain to come up
	I0317 13:40:15.820227  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:15.820683  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:40:15.820726  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:40:15.820645  662257 retry.go:31] will retry after 5.248573719s: waiting for domain to come up
	I0317 13:40:21.072489  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.073002  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) found domain IP: 192.168.50.55
	I0317 13:40:21.073026  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) reserving static IP address...
	I0317 13:40:21.073047  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has current primary IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.073426  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-312638", mac: "52:54:00:2a:ac:41", ip: "192.168.50.55"} in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.149104  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | Getting to WaitForSSH function...
	I0317 13:40:21.149141  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) reserved static IP address 192.168.50.55 for domain kubernetes-upgrade-312638
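
The "waiting for domain to come up" lines above show a polling loop whose delay grows with some jitter on each attempt: roughly 211ms, 284ms, 399ms, and so on up to about 5s, until a DHCP lease appears. A minimal backoff helper in that spirit might look like the following; the growth factor, attempt count, and the lease-lookup stub are assumptions, not minikube's retry.go:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a growing,
// jittered interval between attempts.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay each round
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		// Stand-in for querying the libvirt network's DHCP leases.
		if time.Since(start) > 2*time.Second {
			return "192.168.50.55", nil
		}
		return "", errors.New("no lease yet")
	}, 20)
	fmt.Println(ip, err)
}
```
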
	I0317 13:40:21.149159  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) waiting for SSH...
	I0317 13:40:21.152636  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.153041  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:21.153086  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.153193  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | Using SSH client type: external
	I0317 13:40:21.153213  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa (-rw-------)
	I0317 13:40:21.153238  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:40:21.153249  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | About to run SSH command:
	I0317 13:40:21.153257  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | exit 0
	I0317 13:40:21.283287  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | SSH cmd err, output: <nil>: 
	I0317 13:40:21.283598  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) KVM machine creation complete
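
Both the external-ssh WaitForSSH above and the native client used just below boil down to opening an SSH session as the docker user with the machine's private key and running `exit 0`. A hedged, self-contained version of that readiness probe using golang.org/x/crypto/ssh (the address, user, and key path are taken from the log; the timeout is an assumption):

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH dials the VM and runs "exit 0", returning nil once a session can
// be opened and the command succeeds.
func probeSSH(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh test VM, no known_hosts entry
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := probeSSH("192.168.50.55:22", "docker",
		"/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa")
	fmt.Println("ssh probe:", err)
}
```
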
	I0317 13:40:21.283967  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetConfigRaw
	I0317 13:40:21.284545  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:40:21.284719  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:40:21.284885  661927 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 13:40:21.284899  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetState
	I0317 13:40:21.286145  661927 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 13:40:21.286158  661927 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 13:40:21.286163  661927 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 13:40:21.286169  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:40:21.288247  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.288557  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:21.288599  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.288685  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:40:21.288864  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:21.289037  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:21.289166  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:40:21.289303  661927 main.go:141] libmachine: Using SSH client type: native
	I0317 13:40:21.289552  661927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I0317 13:40:21.289563  661927 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 13:40:21.398599  661927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:40:21.398628  661927 main.go:141] libmachine: Detecting the provisioner...
	I0317 13:40:21.398640  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:40:21.401392  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.401783  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:21.401814  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.401989  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:40:21.402191  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:21.402370  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:21.402497  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:40:21.402691  661927 main.go:141] libmachine: Using SSH client type: native
	I0317 13:40:21.402895  661927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I0317 13:40:21.402906  661927 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 13:40:21.515971  661927 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 13:40:21.516057  661927 main.go:141] libmachine: found compatible host: buildroot
	I0317 13:40:21.516069  661927 main.go:141] libmachine: Provisioning with buildroot...
	I0317 13:40:21.516077  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetMachineName
	I0317 13:40:21.516341  661927 buildroot.go:166] provisioning hostname "kubernetes-upgrade-312638"
	I0317 13:40:21.516366  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetMachineName
	I0317 13:40:21.516565  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:40:21.519116  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.519417  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:21.519458  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.519648  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:40:21.519840  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:21.519989  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:21.520108  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:40:21.520251  661927 main.go:141] libmachine: Using SSH client type: native
	I0317 13:40:21.520454  661927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I0317 13:40:21.520467  661927 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-312638 && echo "kubernetes-upgrade-312638" | sudo tee /etc/hostname
	I0317 13:40:21.644204  661927 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-312638
	
	I0317 13:40:21.644246  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:40:21.646814  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.647140  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:21.647173  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.647338  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:40:21.647546  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:21.647729  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:21.647860  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:40:21.648025  661927 main.go:141] libmachine: Using SSH client type: native
	I0317 13:40:21.648226  661927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I0317 13:40:21.648243  661927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-312638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-312638/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-312638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:40:21.767640  661927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:40:21.767669  661927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:40:21.767708  661927 buildroot.go:174] setting up certificates
	I0317 13:40:21.767719  661927 provision.go:84] configureAuth start
	I0317 13:40:21.767734  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetMachineName
	I0317 13:40:21.768028  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetIP
	I0317 13:40:21.770491  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.770791  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:21.770813  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.770964  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:40:21.773117  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.773439  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:21.773492  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.773586  661927 provision.go:143] copyHostCerts
	I0317 13:40:21.773642  661927 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:40:21.773666  661927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:40:21.773732  661927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:40:21.773851  661927 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:40:21.773861  661927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:40:21.773892  661927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:40:21.773969  661927 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:40:21.773990  661927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:40:21.774023  661927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:40:21.774389  661927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-312638 san=[127.0.0.1 192.168.50.55 kubernetes-upgrade-312638 localhost minikube]
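
The server certificate generated here is signed by the local minikube CA and carries the SANs listed in the log line (127.0.0.1, 192.168.50.55, the profile name, localhost, minikube). A sketch of the same kind of operation with crypto/x509 follows; to stay self-contained it creates a throwaway CA in memory rather than reading ca.pem/ca-key.pem, and the key size, serial numbers, and validity period are assumptions:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; in the log the CA comes from .minikube/certs/ca.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs reported in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-312638"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.55")},
		DNSNames:     []string{"kubernetes-upgrade-312638", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
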
	I0317 13:40:21.908082  661927 provision.go:177] copyRemoteCerts
	I0317 13:40:21.908143  661927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:40:21.908172  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:40:21.910706  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.910998  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:21.911022  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:21.911236  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:40:21.911457  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:21.911637  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:40:21.911787  661927 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa Username:docker}
	I0317 13:40:21.997262  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:40:22.020564  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0317 13:40:22.042809  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 13:40:22.067197  661927 provision.go:87] duration metric: took 299.465952ms to configureAuth
	I0317 13:40:22.067221  661927 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:40:22.067411  661927 config.go:182] Loaded profile config "kubernetes-upgrade-312638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:40:22.067502  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:40:22.070071  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.070382  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:22.070411  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.070595  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:40:22.070800  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:22.070982  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:22.071135  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:40:22.071259  661927 main.go:141] libmachine: Using SSH client type: native
	I0317 13:40:22.071468  661927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I0317 13:40:22.071484  661927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:40:22.292979  661927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:40:22.293012  661927 main.go:141] libmachine: Checking connection to Docker...
	I0317 13:40:22.293021  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetURL
	I0317 13:40:22.294333  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | using libvirt version 6000000
	I0317 13:40:22.296570  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.296917  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:22.296948  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.297071  661927 main.go:141] libmachine: Docker is up and running!
	I0317 13:40:22.297087  661927 main.go:141] libmachine: Reticulating splines...
	I0317 13:40:22.297095  661927 client.go:171] duration metric: took 24.131377743s to LocalClient.Create
	I0317 13:40:22.297128  661927 start.go:167] duration metric: took 24.131454968s to libmachine.API.Create "kubernetes-upgrade-312638"
	I0317 13:40:22.297142  661927 start.go:293] postStartSetup for "kubernetes-upgrade-312638" (driver="kvm2")
	I0317 13:40:22.297155  661927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:40:22.297181  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:40:22.297439  661927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:40:22.297472  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:40:22.299586  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.299915  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:22.299941  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.300110  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:40:22.300286  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:22.300437  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:40:22.300585  661927 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa Username:docker}
	I0317 13:40:22.385340  661927 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:40:22.389329  661927 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:40:22.389383  661927 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:40:22.389522  661927 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:40:22.389608  661927 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:40:22.389690  661927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:40:22.398365  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:40:22.419586  661927 start.go:296] duration metric: took 122.428908ms for postStartSetup
	I0317 13:40:22.419639  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetConfigRaw
	I0317 13:40:22.420191  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetIP
	I0317 13:40:22.422583  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.422936  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:22.422967  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.423196  661927 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/config.json ...
	I0317 13:40:22.423378  661927 start.go:128] duration metric: took 24.279454559s to createHost
	I0317 13:40:22.423400  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:40:22.425437  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.425764  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:22.425792  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.425943  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:40:22.426116  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:22.426320  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:22.426502  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:40:22.426697  661927 main.go:141] libmachine: Using SSH client type: native
	I0317 13:40:22.426902  661927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I0317 13:40:22.426913  661927 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:40:22.540056  661927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742218822.518125341
	
	I0317 13:40:22.540083  661927 fix.go:216] guest clock: 1742218822.518125341
	I0317 13:40:22.540091  661927 fix.go:229] Guest: 2025-03-17 13:40:22.518125341 +0000 UTC Remote: 2025-03-17 13:40:22.423389851 +0000 UTC m=+49.812749757 (delta=94.73549ms)
	I0317 13:40:22.540113  661927 fix.go:200] guest clock delta is within tolerance: 94.73549ms
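	The guest-clock check above simply compares date +%s.%N on the VM against the controller's own clock and skips a resync when the delta (about 95 ms here) is inside tolerance. A rough, hedged way to repeat the comparison by hand, reusing the SSH key and IP from this run (bc is assumed to be available on the host):
	KEY=/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa
	guest=$(ssh -o StrictHostKeyChecking=no -i "$KEY" docker@192.168.50.55 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest-host delta: $(echo "$guest - $host" | bc) s"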
	I0317 13:40:22.540118  661927 start.go:83] releasing machines lock for "kubernetes-upgrade-312638", held for 24.39636385s
	I0317 13:40:22.540145  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:40:22.540422  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetIP
	I0317 13:40:22.543183  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.543685  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:22.543710  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.543926  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:40:22.544453  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:40:22.544629  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:40:22.544711  661927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:40:22.544762  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:40:22.544861  661927 ssh_runner.go:195] Run: cat /version.json
	I0317 13:40:22.544888  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:40:22.547281  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.547560  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.547633  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:22.547666  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.547815  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:40:22.547911  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:22.547950  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:22.548020  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:22.548099  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:40:22.548173  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:40:22.548243  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:40:22.548323  661927 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa Username:docker}
	I0317 13:40:22.548392  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:40:22.548548  661927 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa Username:docker}
	I0317 13:40:22.632297  661927 ssh_runner.go:195] Run: systemctl --version
	I0317 13:40:22.655569  661927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:40:22.817593  661927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:40:22.823384  661927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:40:22.823468  661927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:40:22.839103  661927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:40:22.839136  661927 start.go:495] detecting cgroup driver to use...
	I0317 13:40:22.839214  661927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:40:22.855261  661927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:40:22.868730  661927 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:40:22.868806  661927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:40:22.882303  661927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:40:22.896079  661927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:40:23.007307  661927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:40:23.182347  661927 docker.go:233] disabling docker service ...
	I0317 13:40:23.182435  661927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:40:23.195681  661927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:40:23.207321  661927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:40:23.323075  661927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:40:23.438980  661927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
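	Disabling cri-dockerd and Docker so neither can hold the CRI socket follows the usual stop, disable, mask sequence shown above. A compact sketch of the same idempotent shutdown (unit names as they appear in this log):
	for u in cri-docker.socket cri-docker.service docker.socket docker.service; do
	    sudo systemctl stop -f "$u" 2>/dev/null || true
	done
	sudo systemctl disable cri-docker.socket docker.socket 2>/dev/null || true
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker || echo "docker is inactive"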
	I0317 13:40:23.458037  661927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:40:23.476454  661927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0317 13:40:23.476538  661927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:40:23.487263  661927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:40:23.487368  661927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:40:23.497036  661927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:40:23.506190  661927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
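	The three sed edits above pin the pause image, switch the cgroup manager, and re-add the conmon cgroup in the same drop-in. A quick way to confirm what they left behind (expected values taken from the commands in this log):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"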
	I0317 13:40:23.515432  661927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:40:23.525017  661927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:40:23.533782  661927 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:40:23.533855  661927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:40:23.545549  661927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
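	The sysctl failure with status 255 is expected on a fresh guest: the bridge-nf-call-iptables knob only exists once br_netfilter is loaded, which is exactly what the fallback modprobe does before IP forwarding is switched on. A sketch of the same recover-by-loading-the-module sequence:
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	    sudo modprobe br_netfilter            # creates /proc/sys/net/bridge/*
	fi
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null
	sudo sysctl net.bridge.bridge-nf-call-iptables  # typically "= 1" once the module is loaded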
	I0317 13:40:23.554890  661927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:40:23.680402  661927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:40:23.763567  661927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:40:23.763663  661927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:40:23.768065  661927 start.go:563] Will wait 60s for crictl version
	I0317 13:40:23.768116  661927 ssh_runner.go:195] Run: which crictl
	I0317 13:40:23.771441  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:40:23.805735  661927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:40:23.805832  661927 ssh_runner.go:195] Run: crio --version
	I0317 13:40:23.831010  661927 ssh_runner.go:195] Run: crio --version
	I0317 13:40:23.859685  661927 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0317 13:40:23.861132  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetIP
	I0317 13:40:23.864500  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:23.866080  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:40:12 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:40:23.866106  661927 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:40:23.866458  661927 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0317 13:40:23.870655  661927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
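	The one-liner above is a small replace-or-append idiom for /etc/hosts: drop any stale host.minikube.internal entry, append the current gateway, and copy the temp file back in one step. Unrolled, with the gateway IP detected for this network:
	GW=192.168.50.1
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '%s\thost.minikube.internal\n' "$GW"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$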
	I0317 13:40:23.884889  661927 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-312638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:40:23.885026  661927 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0317 13:40:23.885085  661927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:40:23.915643  661927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0317 13:40:23.915704  661927 ssh_runner.go:195] Run: which lz4
	I0317 13:40:23.919319  661927 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:40:23.923070  661927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:40:23.923096  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0317 13:40:25.427495  661927 crio.go:462] duration metric: took 1.508221062s to copy over tarball
	I0317 13:40:25.427602  661927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:40:27.988118  661927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.560479745s)
	I0317 13:40:27.988149  661927 crio.go:469] duration metric: took 2.560625216s to extract the tarball
	I0317 13:40:27.988160  661927 ssh_runner.go:146] rm: /preloaded.tar.lz4
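	Because the preload check came back empty, the cached tarball is copied over SSH and unpacked straight into /var; the unpack is plain tar with an lz4 filter, keeping extended attributes so image layers retain their capabilities. A hedged sketch, assuming the tarball is already at /preloaded.tar.lz4 on the guest:
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images | grep kube-apiserver || echo "images still missing"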
	I0317 13:40:28.028766  661927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:40:28.077761  661927 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0317 13:40:28.077791  661927 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0317 13:40:28.077888  661927 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:40:28.077919  661927 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0317 13:40:28.077930  661927 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:40:28.077947  661927 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:40:28.077929  661927 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:40:28.077912  661927 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:40:28.077899  661927 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:40:28.077977  661927 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0317 13:40:28.079728  661927 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:40:28.079739  661927 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:40:28.079753  661927 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0317 13:40:28.079758  661927 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0317 13:40:28.079733  661927 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:40:28.079730  661927 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:40:28.079739  661927 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:40:28.079736  661927 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:40:28.236101  661927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:40:28.237189  661927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0317 13:40:28.239068  661927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0317 13:40:28.248247  661927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:40:28.254246  661927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:40:28.264224  661927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0317 13:40:28.321868  661927 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0317 13:40:28.321936  661927 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:40:28.321996  661927 ssh_runner.go:195] Run: which crictl
	I0317 13:40:28.333732  661927 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0317 13:40:28.333791  661927 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0317 13:40:28.333792  661927 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0317 13:40:28.333832  661927 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:40:28.333869  661927 ssh_runner.go:195] Run: which crictl
	I0317 13:40:28.333876  661927 ssh_runner.go:195] Run: which crictl
	I0317 13:40:28.353166  661927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:40:28.379694  661927 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0317 13:40:28.379752  661927 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:40:28.379799  661927 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0317 13:40:28.379852  661927 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:40:28.379811  661927 ssh_runner.go:195] Run: which crictl
	I0317 13:40:28.379901  661927 ssh_runner.go:195] Run: which crictl
	I0317 13:40:28.382016  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:40:28.382056  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:40:28.382125  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:40:28.382222  661927 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0317 13:40:28.382260  661927 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0317 13:40:28.382292  661927 ssh_runner.go:195] Run: which crictl
	I0317 13:40:28.426641  661927 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0317 13:40:28.426692  661927 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:40:28.426734  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:40:28.426771  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:40:28.426739  661927 ssh_runner.go:195] Run: which crictl
	I0317 13:40:28.470376  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:40:28.470500  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:40:28.470547  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:40:28.481430  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:40:28.561003  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:40:28.561026  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:40:28.561005  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:40:28.606157  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:40:28.606207  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:40:28.606246  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:40:28.606451  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:40:28.690939  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:40:28.691049  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:40:28.691079  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:40:28.748193  661927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0317 13:40:28.748326  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:40:28.748541  661927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0317 13:40:28.749341  661927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0317 13:40:28.803080  661927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0317 13:40:28.805661  661927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0317 13:40:28.805694  661927 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:40:28.825641  661927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0317 13:40:28.848945  661927 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0317 13:40:29.832945  661927 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:40:29.976465  661927 cache_images.go:92] duration metric: took 1.898650781s to LoadCachedImages
	W0317 13:40:29.976566  661927 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0317 13:40:29.976583  661927 kubeadm.go:934] updating node { 192.168.50.55 8443 v1.20.0 crio true true} ...
	I0317 13:40:29.976703  661927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-312638 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:40:29.976790  661927 ssh_runner.go:195] Run: crio config
	I0317 13:40:30.028150  661927 cni.go:84] Creating CNI manager for ""
	I0317 13:40:30.028190  661927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:40:30.028207  661927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:40:30.028235  661927 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.55 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-312638 NodeName:kubernetes-upgrade-312638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0317 13:40:30.028403  661927 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-312638"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:40:30.028488  661927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0317 13:40:30.038571  661927 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:40:30.038638  661927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:40:30.048088  661927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0317 13:40:30.064092  661927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:40:30.080437  661927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0317 13:40:30.095932  661927 ssh_runner.go:195] Run: grep 192.168.50.55	control-plane.minikube.internal$ /etc/hosts
	I0317 13:40:30.099494  661927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:40:30.111223  661927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:40:30.226921  661927 ssh_runner.go:195] Run: sudo systemctl start kubelet
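	After the drop-in, unit file and kubeadm.yaml are written, the kubelet is reloaded and started. When the kubelet-check further down times out, the first things worth confirming by hand on the guest are the rendered unit and the healthz endpoint that check polls; a short sketch:
	systemctl cat kubelet                          # 10-kubeadm.conf should show the ExecStart flags above
	sudo systemctl status kubelet --no-pager
	curl -sS http://localhost:10248/healthz; echo  # "ok" only once the kubelet is actually healthy
	sudo journalctl -u kubelet --no-pager | tail -n 50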
	I0317 13:40:30.246890  661927 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638 for IP: 192.168.50.55
	I0317 13:40:30.246914  661927 certs.go:194] generating shared ca certs ...
	I0317 13:40:30.246934  661927 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:40:30.247094  661927 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:40:30.247181  661927 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:40:30.247199  661927 certs.go:256] generating profile certs ...
	I0317 13:40:30.247280  661927 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/client.key
	I0317 13:40:30.247308  661927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/client.crt with IP's: []
	I0317 13:40:30.349944  661927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/client.crt ...
	I0317 13:40:30.349976  661927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/client.crt: {Name:mk543061cb6db770fe3de4fe0e88c4b562815bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:40:30.350184  661927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/client.key ...
	I0317 13:40:30.350219  661927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/client.key: {Name:mk84a8af27bf80c23572a665af71f2adea5e9659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:40:30.350344  661927 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.key.ef68222a
	I0317 13:40:30.350365  661927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.crt.ef68222a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.55]
	I0317 13:40:31.008325  661927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.crt.ef68222a ...
	I0317 13:40:31.008355  661927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.crt.ef68222a: {Name:mke4062e8b1ffc32382599b26a20280b0474e8e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:40:31.008526  661927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.key.ef68222a ...
	I0317 13:40:31.008540  661927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.key.ef68222a: {Name:mkf2f31bee9fc05d7e407af1a240e360bb4a2ec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:40:31.008613  661927 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.crt.ef68222a -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.crt
	I0317 13:40:31.008686  661927 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.key.ef68222a -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.key
	I0317 13:40:31.008739  661927 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/proxy-client.key
	I0317 13:40:31.008754  661927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/proxy-client.crt with IP's: []
	I0317 13:40:31.424498  661927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/proxy-client.crt ...
	I0317 13:40:31.424530  661927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/proxy-client.crt: {Name:mke3d40c4ee3ddabf07a2cdbb3a9c8bd613d1546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:40:31.424697  661927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/proxy-client.key ...
	I0317 13:40:31.424715  661927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/proxy-client.key: {Name:mk2119db6a435479ebbce149feca6c5cd48e1b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
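	The profile certificates above (client, apiserver, aggregator proxy-client) are generated in Go against the shared minikubeCA. Purely as an illustration, the client cert is roughly equivalent to the following openssl steps, where ca.crt/ca.key stand in for the CA pair under .minikube and the subject is the one minikube conventionally uses:
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365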
	I0317 13:40:31.424915  661927 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:40:31.424952  661927 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:40:31.424961  661927 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:40:31.424982  661927 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:40:31.425003  661927 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:40:31.425020  661927 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:40:31.425055  661927 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:40:31.425733  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:40:31.450154  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:40:31.476303  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:40:31.500245  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:40:31.531261  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0317 13:40:31.556008  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:40:31.578286  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:40:31.605304  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:40:31.632468  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:40:31.654960  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:40:31.682075  661927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:40:31.708760  661927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:40:31.726654  661927 ssh_runner.go:195] Run: openssl version
	I0317 13:40:31.732054  661927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:40:31.742535  661927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:40:31.746867  661927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:40:31.746927  661927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:40:31.752908  661927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:40:31.762878  661927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:40:31.772670  661927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:40:31.777201  661927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:40:31.777256  661927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:40:31.782513  661927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:40:31.792713  661927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:40:31.803217  661927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:40:31.808900  661927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:40:31.808963  661927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:40:31.814928  661927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
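	The openssl -hash / ln -fs pairs above build the subject-hash layout OpenSSL expects under /etc/ssl/certs: every trusted PEM gets a <hash>.0 symlink so verification can locate it without a rehash pass. For a single file:
	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 in this run
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"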
	I0317 13:40:31.828863  661927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:40:31.833740  661927 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:40:31.833795  661927 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-312638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:40:31.833867  661927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:40:31.833918  661927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:40:31.880211  661927 cri.go:89] found id: ""
	I0317 13:40:31.880313  661927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:40:31.890160  661927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:40:31.899701  661927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:40:31.909042  661927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:40:31.909064  661927 kubeadm.go:157] found existing configuration files:
	
	I0317 13:40:31.909113  661927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:40:31.919291  661927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:40:31.919348  661927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:40:31.928884  661927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:40:31.938051  661927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:40:31.938116  661927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:40:31.947527  661927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:40:31.956371  661927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:40:31.956429  661927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:40:31.965582  661927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:40:31.974402  661927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:40:31.974466  661927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:40:31.983509  661927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:40:32.088231  661927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0317 13:40:32.088434  661927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:40:32.280269  661927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:40:32.280477  661927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:40:32.280634  661927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0317 13:40:32.457478  661927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:40:32.460123  661927 out.go:235]   - Generating certificates and keys ...
	I0317 13:40:32.460231  661927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:40:32.460308  661927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:40:32.576610  661927 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:40:32.654291  661927 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:40:32.734336  661927 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:40:33.122867  661927 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:40:33.294897  661927 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:40:33.295155  661927 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-312638 localhost] and IPs [192.168.50.55 127.0.0.1 ::1]
	I0317 13:40:33.518201  661927 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:40:33.518430  661927 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-312638 localhost] and IPs [192.168.50.55 127.0.0.1 ::1]
	I0317 13:40:33.759221  661927 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:40:33.897184  661927 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:40:34.198203  661927 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:40:34.198531  661927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:40:34.396898  661927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:40:34.571596  661927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:40:34.697573  661927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:40:34.919093  661927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:40:34.937513  661927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:40:34.938396  661927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:40:34.938482  661927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:40:35.055160  661927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:40:35.056917  661927 out.go:235]   - Booting up control plane ...
	I0317 13:40:35.057096  661927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:40:35.064437  661927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:40:35.067833  661927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:40:35.069033  661927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:40:35.073621  661927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0317 13:41:15.068940  661927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0317 13:41:15.069877  661927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:41:15.070053  661927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:41:20.070232  661927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:41:20.070593  661927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:41:30.069824  661927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:41:30.070091  661927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:41:50.069628  661927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:41:50.069907  661927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:42:30.071245  661927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:42:30.071443  661927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:42:30.071452  661927 kubeadm.go:310] 
	I0317 13:42:30.071491  661927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0317 13:42:30.071545  661927 kubeadm.go:310] 		timed out waiting for the condition
	I0317 13:42:30.071556  661927 kubeadm.go:310] 
	I0317 13:42:30.071596  661927 kubeadm.go:310] 	This error is likely caused by:
	I0317 13:42:30.071626  661927 kubeadm.go:310] 		- The kubelet is not running
	I0317 13:42:30.071727  661927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0317 13:42:30.071734  661927 kubeadm.go:310] 
	I0317 13:42:30.071843  661927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0317 13:42:30.071925  661927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0317 13:42:30.071966  661927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0317 13:42:30.071973  661927 kubeadm.go:310] 
	I0317 13:42:30.072065  661927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0317 13:42:30.072143  661927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0317 13:42:30.072169  661927 kubeadm.go:310] 
	I0317 13:42:30.072309  661927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0317 13:42:30.072402  661927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0317 13:42:30.072486  661927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0317 13:42:30.072595  661927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0317 13:42:30.072615  661927 kubeadm.go:310] 
	I0317 13:42:30.073092  661927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:42:30.073207  661927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0317 13:42:30.073351  661927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
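The repeated [kubelet-check] messages above boil down to one failing HTTP probe: kubeadm polls the kubelet's healthz endpoint on localhost:10248 until a deadline expires, and every attempt in this run is refused because the kubelet never came up. A minimal Go sketch of that style of probe, for illustration only (the function name, the fixed 5-second retry interval and the way the deadline is wired up are assumptions; only the URL, the 4m0s budget and the retry-until-timeout behaviour come from the log):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint until it answers 200 OK
// or the deadline passes, mirroring the [kubelet-check] lines in the log.
func waitForKubelet(url string, deadline, interval time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is up and serving /healthz
			}
			err = fmt.Errorf("unexpected status %s", resp.Status)
		}
		if time.Now().After(stop) {
			return fmt.Errorf("timed out waiting for the condition: %v", err)
		}
		fmt.Printf("[kubelet-check] It seems like the kubelet isn't running or healthy: %v\n", err)
		time.Sleep(interval)
	}
}

func main() {
	// 4m0s matches the overall wait-control-plane budget quoted in the log;
	// the 5s retry interval is an assumption for this sketch.
	if err := waitForKubelet("http://localhost:10248/healthz", 4*time.Minute, 5*time.Second); err != nil {
		fmt.Println("Unfortunately, an error has occurred:", err)
	}
}

In this run every probe fails with "connection refused", which is why kubeadm eventually reports "timed out waiting for the condition".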
	W0317 13:42:30.073455  661927 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-312638 localhost] and IPs [192.168.50.55 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-312638 localhost] and IPs [192.168.50.55 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0317 13:42:30.073516  661927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0317 13:42:31.012446  661927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:42:31.025959  661927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:42:31.036081  661927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:42:31.036101  661927 kubeadm.go:157] found existing configuration files:
	
	I0317 13:42:31.036149  661927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:42:31.044532  661927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:42:31.044589  661927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:42:31.053453  661927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:42:31.061639  661927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:42:31.061705  661927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:42:31.070811  661927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:42:31.079286  661927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:42:31.079352  661927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:42:31.088360  661927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:42:31.096801  661927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:42:31.096845  661927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
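Before retrying, minikube treats any leftover kubeconfigs as potentially stale: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is absent (here every grep exits with status 2 because the files do not exist, so each rm is effectively a no-op). A rough local Go equivalent of that check, for illustration only; minikube itself runs the grep/rm commands over SSH via ssh_runner.go, and the helper name below is made up:

package main

import (
	"fmt"
	"os"
	"strings"
)

// endpoint is the control-plane URL minikube expects to find in each file.
const endpoint = "https://control-plane.minikube.internal:8443"

// removeIfStale drops a kubeconfig that does not reference the expected
// endpoint; a missing file is simply skipped, as in the log above.
func removeIfStale(path string) {
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Printf("skip %s: %v\n", path, err)
		return
	}
	if !strings.Contains(string(data), endpoint) {
		fmt.Printf("%q not found in %s - removing\n", endpoint, path)
		_ = os.Remove(path)
	}
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		removeIfStale(f)
	}
}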
	I0317 13:42:31.105629  661927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:42:31.184106  661927 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0317 13:42:31.184189  661927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:42:31.310284  661927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:42:31.310377  661927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:42:31.310452  661927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0317 13:42:31.465862  661927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:42:31.468769  661927 out.go:235]   - Generating certificates and keys ...
	I0317 13:42:31.468889  661927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:42:31.468977  661927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:42:31.469105  661927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0317 13:42:31.469196  661927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0317 13:42:31.469305  661927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0317 13:42:31.469386  661927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0317 13:42:31.469474  661927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0317 13:42:31.469582  661927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0317 13:42:31.469683  661927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0317 13:42:31.469804  661927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0317 13:42:31.469869  661927 kubeadm.go:310] [certs] Using the existing "sa" key
	I0317 13:42:31.469962  661927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:42:31.747474  661927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:42:32.103687  661927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:42:32.236719  661927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:42:32.381131  661927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:42:32.396357  661927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:42:32.396483  661927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:42:32.396563  661927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:42:32.545660  661927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:42:32.547319  661927 out.go:235]   - Booting up control plane ...
	I0317 13:42:32.547522  661927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:42:32.553428  661927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:42:32.554837  661927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:42:32.556454  661927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:42:32.559050  661927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0317 13:43:12.561609  661927 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0317 13:43:12.561839  661927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:43:12.562062  661927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:43:17.562878  661927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:43:17.563138  661927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:43:27.563751  661927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:43:27.563988  661927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:43:47.562714  661927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:43:47.563028  661927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:44:27.562210  661927 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:44:27.562574  661927 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:44:27.562618  661927 kubeadm.go:310] 
	I0317 13:44:27.562699  661927 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0317 13:44:27.562753  661927 kubeadm.go:310] 		timed out waiting for the condition
	I0317 13:44:27.562762  661927 kubeadm.go:310] 
	I0317 13:44:27.562807  661927 kubeadm.go:310] 	This error is likely caused by:
	I0317 13:44:27.562849  661927 kubeadm.go:310] 		- The kubelet is not running
	I0317 13:44:27.562972  661927 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0317 13:44:27.562984  661927 kubeadm.go:310] 
	I0317 13:44:27.563104  661927 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0317 13:44:27.563143  661927 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0317 13:44:27.563176  661927 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0317 13:44:27.563185  661927 kubeadm.go:310] 
	I0317 13:44:27.563306  661927 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0317 13:44:27.563405  661927 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0317 13:44:27.563418  661927 kubeadm.go:310] 
	I0317 13:44:27.563598  661927 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0317 13:44:27.563717  661927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0317 13:44:27.563873  661927 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0317 13:44:27.563992  661927 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0317 13:44:27.564005  661927 kubeadm.go:310] 
	I0317 13:44:27.564798  661927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:44:27.564938  661927 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0317 13:44:27.565096  661927 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0317 13:44:27.565109  661927 kubeadm.go:394] duration metric: took 3m55.731316792s to StartCluster
	I0317 13:44:27.565203  661927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:44:27.565287  661927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:44:27.611999  661927 cri.go:89] found id: ""
	I0317 13:44:27.612052  661927 logs.go:282] 0 containers: []
	W0317 13:44:27.612064  661927 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:44:27.612073  661927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:44:27.612158  661927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:44:27.660974  661927 cri.go:89] found id: ""
	I0317 13:44:27.661021  661927 logs.go:282] 0 containers: []
	W0317 13:44:27.661034  661927 logs.go:284] No container was found matching "etcd"
	I0317 13:44:27.661045  661927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:44:27.661130  661927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:44:27.708420  661927 cri.go:89] found id: ""
	I0317 13:44:27.708462  661927 logs.go:282] 0 containers: []
	W0317 13:44:27.708476  661927 logs.go:284] No container was found matching "coredns"
	I0317 13:44:27.708487  661927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:44:27.708579  661927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:44:27.748484  661927 cri.go:89] found id: ""
	I0317 13:44:27.748518  661927 logs.go:282] 0 containers: []
	W0317 13:44:27.748532  661927 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:44:27.748540  661927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:44:27.748614  661927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:44:27.788102  661927 cri.go:89] found id: ""
	I0317 13:44:27.788138  661927 logs.go:282] 0 containers: []
	W0317 13:44:27.788157  661927 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:44:27.788166  661927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:44:27.788236  661927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:44:27.836308  661927 cri.go:89] found id: ""
	I0317 13:44:27.836336  661927 logs.go:282] 0 containers: []
	W0317 13:44:27.836346  661927 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:44:27.836354  661927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:44:27.836427  661927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:44:27.873861  661927 cri.go:89] found id: ""
	I0317 13:44:27.873901  661927 logs.go:282] 0 containers: []
	W0317 13:44:27.873915  661927 logs.go:284] No container was found matching "kindnet"
	I0317 13:44:27.873928  661927 logs.go:123] Gathering logs for kubelet ...
	I0317 13:44:27.873944  661927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:44:27.926660  661927 logs.go:123] Gathering logs for dmesg ...
	I0317 13:44:27.926704  661927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:44:27.940944  661927 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:44:27.940995  661927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:44:28.082471  661927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:44:28.082500  661927 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:44:28.082519  661927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:44:28.211403  661927 logs.go:123] Gathering logs for container status ...
	I0317 13:44:28.211445  661927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
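With kubeadm init having failed twice, minikube falls back to collecting diagnostics: the kubelet and CRI-O journals, dmesg, `kubectl describe nodes` (which fails because the apiserver never came up), and the container status. Each of the "Gathering logs for ..." steps above is just a shell command executed on the node; the sketch below reproduces the same command list locally for illustration only (this program is not part of minikube, which runs these commands over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The command list is copied verbatim from the "Gathering logs for ..."
	// lines above; minikube executes them on the node over SSH.
	diagnostics := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, d := range diagnostics {
		fmt.Printf("==> %s <==\n", d.name)
		out, err := exec.Command("/bin/bash", "-c", d.cmd).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// A failing step (e.g. describe nodes while the apiserver is
			// down) is reported but does not stop the collection.
			fmt.Printf("(%s failed: %v)\n", d.name, err)
		}
	}
}

Here "describe nodes" is the only step that errors, since localhost:8443 refuses connections.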
	W0317 13:44:28.272152  661927 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0317 13:44:28.272232  661927 out.go:270] * 
	W0317 13:44:28.272310  661927 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0317 13:44:28.272330  661927 out.go:270] * 
	W0317 13:44:28.273188  661927 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 13:44:28.276565  661927 out.go:201] 
	W0317 13:44:28.277749  661927 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0317 13:44:28.277801  661927 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0317 13:44:28.277828  661927 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0317 13:44:28.279232  661927 out.go:201] 

                                                
                                                
** /stderr **
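For reference, the kubelet troubleshooting steps suggested in the stderr above can be replayed against this profile over SSH; a minimal sketch (profile name taken from this run, crictl invocation mirrors the one kubeadm prints, and it assumes the VM is still reachable at this point):

	out/minikube-linux-amd64 -p kubernetes-upgrade-312638 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p kubernetes-upgrade-312638 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	out/minikube-linux-amd64 -p kubernetes-upgrade-312638 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

The retry suggested at the end of the stderr would look like the failing start command plus the cgroup-driver override (a sketch, not what the test actually ran):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-312638 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd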
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-312638 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-312638
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-312638: (1.362985098s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-312638 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-312638 status --format={{.Host}}: exit status 7 (78.639965ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-312638 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-312638 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.588748262s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-312638 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-312638 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-312638 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (84.774588ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-312638] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-312638
	    minikube start -p kubernetes-upgrade-312638 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3126382 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-312638 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
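The refused downgrade leaves the existing v1.32.2 cluster untouched; a quick way to confirm the control plane is still serving v1.32.2 before the follow-up restart (a sketch mirroring the version check the test itself runs):

	kubectl --context kubernetes-upgrade-312638 version --output=json
	kubectl --context kubernetes-upgrade-312638 get nodes -o wide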
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-312638 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-312638 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.522723828s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-03-17 13:46:44.052758495 +0000 UTC m=+3923.538555197
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-312638 -n kubernetes-upgrade-312638
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-312638 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-312638 logs -n 25: (2.157204715s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-788750 sudo                  | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat              | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat              | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                  | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                  | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                  | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo find             | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo crio             | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-788750                       | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:43 UTC |
	| start   | -p cert-expiration-355456              | cert-expiration-355456    | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:44 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-880805                        | pause-880805              | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:44 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-662195            | force-systemd-env-662195  | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:44 UTC |
	| start   | -p force-systemd-flag-638911           | force-systemd-flag-638911 | jenkins | v1.35.0 | 17 Mar 25 13:44 UTC | 17 Mar 25 13:45 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-312638           | kubernetes-upgrade-312638 | jenkins | v1.35.0 | 17 Mar 25 13:44 UTC | 17 Mar 25 13:45 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-638911 ssh cat      | force-systemd-flag-638911 | jenkins | v1.35.0 | 17 Mar 25 13:45 UTC | 17 Mar 25 13:45 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-638911           | force-systemd-flag-638911 | jenkins | v1.35.0 | 17 Mar 25 13:45 UTC | 17 Mar 25 13:45 UTC |
	| start   | -p cert-options-197082                 | cert-options-197082       | jenkins | v1.35.0 | 17 Mar 25 13:45 UTC | 17 Mar 25 13:45 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p pause-880805                        | pause-880805              | jenkins | v1.35.0 | 17 Mar 25 13:45 UTC | 17 Mar 25 13:45 UTC |
	| start   | -p old-k8s-version-803027              | old-k8s-version-803027    | jenkins | v1.35.0 | 17 Mar 25 13:45 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-312638           | kubernetes-upgrade-312638 | jenkins | v1.35.0 | 17 Mar 25 13:45 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-312638           | kubernetes-upgrade-312638 | jenkins | v1.35.0 | 17 Mar 25 13:45 UTC | 17 Mar 25 13:46 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-197082 ssh                | cert-options-197082       | jenkins | v1.35.0 | 17 Mar 25 13:45 UTC | 17 Mar 25 13:45 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-197082 -- sudo         | cert-options-197082       | jenkins | v1.35.0 | 17 Mar 25 13:45 UTC | 17 Mar 25 13:45 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-197082                 | cert-options-197082       | jenkins | v1.35.0 | 17 Mar 25 13:45 UTC | 17 Mar 25 13:45 UTC |
	| start   | -p no-preload-142429                   | no-preload-142429         | jenkins | v1.35.0 | 17 Mar 25 13:45 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:45:48
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:45:48.595820  669958 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:45:48.596109  669958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:45:48.596121  669958 out.go:358] Setting ErrFile to fd 2...
	I0317 13:45:48.596128  669958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:45:48.596310  669958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:45:48.596921  669958 out.go:352] Setting JSON to false
	I0317 13:45:48.597926  669958 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12493,"bootTime":1742206656,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:45:48.598033  669958 start.go:139] virtualization: kvm guest
	I0317 13:45:48.600118  669958 out.go:177] * [no-preload-142429] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:45:48.601672  669958 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:45:48.601697  669958 notify.go:220] Checking for updates...
	I0317 13:45:48.604303  669958 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:45:48.605687  669958 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:45:48.606830  669958 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:45:48.608093  669958 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:45:48.609172  669958 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:45:48.610706  669958 config.go:182] Loaded profile config "cert-expiration-355456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:45:48.610798  669958 config.go:182] Loaded profile config "kubernetes-upgrade-312638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:45:48.610876  669958 config.go:182] Loaded profile config "old-k8s-version-803027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:45:48.610967  669958 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:45:48.647414  669958 out.go:177] * Using the kvm2 driver based on user configuration
	I0317 13:45:48.648651  669958 start.go:297] selected driver: kvm2
	I0317 13:45:48.648671  669958 start.go:901] validating driver "kvm2" against <nil>
	I0317 13:45:48.648687  669958 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:45:48.649552  669958 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:48.649659  669958 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 13:45:48.671200  669958 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 13:45:48.671246  669958 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 13:45:48.671458  669958 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:45:48.671492  669958 cni.go:84] Creating CNI manager for ""
	I0317 13:45:48.671557  669958 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:45:48.671571  669958 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 13:45:48.671628  669958 start.go:340] cluster config:
	{Name:no-preload-142429 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-142429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:45:48.671748  669958 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:48.673445  669958 out.go:177] * Starting "no-preload-142429" primary control-plane node in "no-preload-142429" cluster
	I0317 13:45:51.167057  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:51.167496  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has current primary IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:51.167524  669182 main.go:141] libmachine: (old-k8s-version-803027) found domain IP: 192.168.61.229
	I0317 13:45:51.167552  669182 main.go:141] libmachine: (old-k8s-version-803027) reserving static IP address...
	I0317 13:45:51.167850  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-803027", mac: "52:54:00:c7:07:e9", ip: "192.168.61.229"} in network mk-old-k8s-version-803027
	I0317 13:45:51.243022  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Getting to WaitForSSH function...
	I0317 13:45:51.243060  669182 main.go:141] libmachine: (old-k8s-version-803027) reserved static IP address 192.168.61.229 for domain old-k8s-version-803027
	I0317 13:45:51.243073  669182 main.go:141] libmachine: (old-k8s-version-803027) waiting for SSH...
	I0317 13:45:51.245500  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:51.245879  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027
	I0317 13:45:51.245907  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find defined IP address of network mk-old-k8s-version-803027 interface with MAC address 52:54:00:c7:07:e9
	I0317 13:45:51.246041  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Using SSH client type: external
	I0317 13:45:51.246070  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa (-rw-------)
	I0317 13:45:51.246105  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:45:51.246144  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | About to run SSH command:
	I0317 13:45:51.246162  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | exit 0
	I0317 13:45:51.249914  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | SSH cmd err, output: exit status 255: 
	I0317 13:45:51.249945  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0317 13:45:51.249976  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | command : exit 0
	I0317 13:45:51.249995  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | err     : exit status 255
	I0317 13:45:51.250027  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | output  : 
	I0317 13:45:48.674553  669958 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:45:48.674662  669958 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/no-preload-142429/config.json ...
	I0317 13:45:48.674694  669958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/no-preload-142429/config.json: {Name:mke6372933e7e16cef366f7d1de833e935d646d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:45:48.674752  669958 cache.go:107] acquiring lock: {Name:mk1553c3bceee6f8e07923233d079498e1a6b8e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:48.674773  669958 cache.go:107] acquiring lock: {Name:mkba8898433823f2e0d32a62993d452bf20ce0df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:48.674835  669958 cache.go:107] acquiring lock: {Name:mk037bd1c8797459b2f585b556695812e342c680 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:48.674786  669958 cache.go:107] acquiring lock: {Name:mk411932b999887cb43733cb1bfb1a450f09a14b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:48.674872  669958 cache.go:107] acquiring lock: {Name:mkf77448483b41c34f2923e2ea3b7daf59b58f51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:48.674884  669958 start.go:360] acquireMachinesLock for no-preload-142429: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:45:48.674853  669958 cache.go:107] acquiring lock: {Name:mkfc2d8b45b06962edb8b36b2fd024d1cd572401 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:48.674826  669958 cache.go:107] acquiring lock: {Name:mk59a50d6d8853ee79c78218f65a05fe07c68f8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:48.674927  669958 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.2
	I0317 13:45:48.674982  669958 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0317 13:45:48.675000  669958 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.2
	I0317 13:45:48.674840  669958 cache.go:107] acquiring lock: {Name:mkaea491f76be4a0b55e10df2979ee2cc122575d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:48.675023  669958 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0317 13:45:48.675047  669958 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0317 13:45:48.675089  669958 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0317 13:45:48.674853  669958 cache.go:115] /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0317 13:45:48.674937  669958 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.2
	I0317 13:45:48.675207  669958 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 460.624µs
	I0317 13:45:48.675228  669958 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0317 13:45:48.676307  669958 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0317 13:45:48.676315  669958 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.2
	I0317 13:45:48.676308  669958 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.2
	I0317 13:45:48.676344  669958 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.2
	I0317 13:45:48.676330  669958 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0317 13:45:48.676459  669958 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0317 13:45:48.676507  669958 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0317 13:45:48.869708  669958 cache.go:162] opening:  /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0317 13:45:48.878758  669958 cache.go:162] opening:  /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2
	I0317 13:45:48.882886  669958 cache.go:162] opening:  /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2
	I0317 13:45:48.884141  669958 cache.go:162] opening:  /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0317 13:45:48.897585  669958 cache.go:162] opening:  /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2
	I0317 13:45:48.901330  669958 cache.go:162] opening:  /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0317 13:45:48.904357  669958 cache.go:162] opening:  /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2
	I0317 13:45:48.958262  669958 cache.go:157] /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0317 13:45:48.958285  669958 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 283.476485ms
	I0317 13:45:48.958302  669958 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0317 13:45:49.376357  669958 cache.go:157] /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0317 13:45:49.376394  669958 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 701.637092ms
	I0317 13:45:49.376410  669958 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0317 13:45:50.321335  669958 cache.go:157] /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0317 13:45:50.321372  669958 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 1.646499808s
	I0317 13:45:50.321388  669958 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0317 13:45:50.335171  669958 cache.go:157] /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0317 13:45:50.335201  669958 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 1.660398322s
	I0317 13:45:50.335214  669958 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0317 13:45:50.504238  669958 cache.go:157] /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0317 13:45:50.504272  669958 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 1.829438493s
	I0317 13:45:50.504293  669958 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0317 13:45:50.512231  669958 cache.go:157] /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0317 13:45:50.512264  669958 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 1.837428375s
	I0317 13:45:50.512278  669958 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0317 13:45:50.941969  669958 cache.go:157] /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0317 13:45:50.942000  669958 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 2.267223021s
	I0317 13:45:50.942011  669958 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0317 13:45:50.942028  669958 cache.go:87] Successfully saved all images to host disk.
	I0317 13:45:55.484187  669506 start.go:364] duration metric: took 33.800753613s to acquireMachinesLock for "kubernetes-upgrade-312638"
	I0317 13:45:55.484267  669506 start.go:96] Skipping create...Using existing machine configuration
	I0317 13:45:55.484279  669506 fix.go:54] fixHost starting: 
	I0317 13:45:55.484797  669506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:45:55.484855  669506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:45:55.502960  669506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41743
	I0317 13:45:55.503398  669506 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:45:55.503870  669506 main.go:141] libmachine: Using API Version  1
	I0317 13:45:55.503897  669506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:45:55.504225  669506 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:45:55.504431  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:45:55.504571  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetState
	I0317 13:45:55.506290  669506 fix.go:112] recreateIfNeeded on kubernetes-upgrade-312638: state=Running err=<nil>
	W0317 13:45:55.506309  669506 fix.go:138] unexpected machine state, will restart: <nil>
	I0317 13:45:55.508478  669506 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-312638" VM ...
	I0317 13:45:55.509593  669506 machine.go:93] provisionDockerMachine start ...
	I0317 13:45:55.509624  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:45:55.509823  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:45:55.512587  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:55.513011  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:45:55.513042  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:55.513277  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:45:55.513471  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:45:55.513630  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:45:55.513763  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:45:55.513955  669506 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:55.514169  669506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I0317 13:45:55.514183  669506 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 13:45:55.623731  669506 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-312638
	
	I0317 13:45:55.623773  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetMachineName
	I0317 13:45:55.624079  669506 buildroot.go:166] provisioning hostname "kubernetes-upgrade-312638"
	I0317 13:45:55.624116  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetMachineName
	I0317 13:45:55.624339  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:45:55.627346  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:55.627852  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:45:55.627893  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:55.628054  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:45:55.628252  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:45:55.628466  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:45:55.628648  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:45:55.628857  669506 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:55.629072  669506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I0317 13:45:55.629084  669506 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-312638 && echo "kubernetes-upgrade-312638" | sudo tee /etc/hostname
	I0317 13:45:55.749586  669506 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-312638
	
	I0317 13:45:55.749626  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:45:55.752789  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:55.753133  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:45:55.753161  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:55.753392  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:45:55.753576  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:45:55.753723  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:45:55.753838  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:45:55.753998  669506 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:55.754247  669506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I0317 13:45:55.754263  669506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-312638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-312638/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-312638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:45:55.865228  669506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:45:55.865264  669506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:45:55.865323  669506 buildroot.go:174] setting up certificates
	I0317 13:45:55.865340  669506 provision.go:84] configureAuth start
	I0317 13:45:55.865359  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetMachineName
	I0317 13:45:55.865703  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetIP
	I0317 13:45:55.868638  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:55.868963  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:45:55.868980  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:55.869132  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:45:55.871506  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:55.871910  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:45:55.871939  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:55.872029  669506 provision.go:143] copyHostCerts
	I0317 13:45:55.872087  669506 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:45:55.872105  669506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:45:55.872161  669506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:45:55.872258  669506 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:45:55.872267  669506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:45:55.872286  669506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:45:55.872358  669506 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:45:55.872365  669506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:45:55.872384  669506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:45:55.872445  669506 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-312638 san=[127.0.0.1 192.168.50.55 kubernetes-upgrade-312638 localhost minikube]
	I0317 13:45:55.998367  669506 provision.go:177] copyRemoteCerts
	I0317 13:45:55.998428  669506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:45:55.998456  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:45:56.001602  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:56.001956  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:45:56.001989  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:56.002172  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:45:56.002387  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:45:56.002588  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:45:56.002735  669506 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa Username:docker}
	I0317 13:45:56.089269  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:45:56.116113  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0317 13:45:56.143558  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0317 13:45:56.172339  669506 provision.go:87] duration metric: took 306.974972ms to configureAuth
	I0317 13:45:56.172377  669506 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:45:56.172602  669506 config.go:182] Loaded profile config "kubernetes-upgrade-312638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:45:56.172710  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:45:56.175975  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:56.176269  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:45:56.176307  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:45:56.176495  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:45:56.176646  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:45:56.176826  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:45:56.176994  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:45:56.177140  669506 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:56.177344  669506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I0317 13:45:56.177357  669506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:45:54.250106  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Getting to WaitForSSH function...
	I0317 13:45:54.252386  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.252741  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.252766  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.252960  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Using SSH client type: external
	I0317 13:45:54.252982  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa (-rw-------)
	I0317 13:45:54.253004  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:45:54.253015  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | About to run SSH command:
	I0317 13:45:54.253038  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | exit 0
	I0317 13:45:54.375409  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | SSH cmd err, output: <nil>: 
	I0317 13:45:54.375718  669182 main.go:141] libmachine: (old-k8s-version-803027) KVM machine creation complete
	I0317 13:45:54.376028  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetConfigRaw
	I0317 13:45:54.376621  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:54.376839  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:54.377008  669182 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 13:45:54.377020  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetState
	I0317 13:45:54.378269  669182 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 13:45:54.378281  669182 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 13:45:54.378291  669182 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 13:45:54.378301  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.380591  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.380959  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.380981  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.381135  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:54.381316  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.381479  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.381623  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:54.381788  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:54.382014  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:54.382026  669182 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 13:45:54.478534  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:45:54.478569  669182 main.go:141] libmachine: Detecting the provisioner...
	I0317 13:45:54.478586  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.481535  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.481803  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.481828  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.481974  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:54.482139  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.482365  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.482555  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:54.482721  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:54.482920  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:54.482957  669182 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 13:45:54.583789  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 13:45:54.583877  669182 main.go:141] libmachine: found compatible host: buildroot
	I0317 13:45:54.583884  669182 main.go:141] libmachine: Provisioning with buildroot...
	I0317 13:45:54.583898  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetMachineName
	I0317 13:45:54.584188  669182 buildroot.go:166] provisioning hostname "old-k8s-version-803027"
	I0317 13:45:54.584220  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetMachineName
	I0317 13:45:54.584422  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.586680  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.587143  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.587180  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.587367  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:54.587557  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.587735  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.587903  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:54.588106  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:54.588333  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:54.588345  669182 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-803027 && echo "old-k8s-version-803027" | sudo tee /etc/hostname
	I0317 13:45:54.700090  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-803027
	
	I0317 13:45:54.700122  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.702728  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.703104  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.703134  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.703311  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:54.703545  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.703679  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.703837  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:54.703976  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:54.704214  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:54.704237  669182 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-803027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-803027/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-803027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:45:54.811761  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
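	The shell block above is the hostname-pinning step: after setting the VM hostname, the provisioner makes sure the machine name resolves locally by rewriting, or appending, a 127.0.1.1 entry in /etc/hosts. A minimal Go sketch that assembles the same script (the helper name is made up; the shell body mirrors the logged command):

package main

import "fmt"

// hostsPinScript builds the shell snippet logged above, which pins the machine
// name to 127.0.1.1 in /etc/hosts so the hostname always resolves locally.
// Hypothetical helper for illustration only.
func hostsPinScript(name string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, name)
}

func main() {
	fmt.Println(hostsPinScript("old-k8s-version-803027"))
}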
	I0317 13:45:54.811794  669182 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:45:54.811837  669182 buildroot.go:174] setting up certificates
	I0317 13:45:54.811850  669182 provision.go:84] configureAuth start
	I0317 13:45:54.811864  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetMachineName
	I0317 13:45:54.812197  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:45:54.814656  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.815055  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.815087  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.815247  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.817348  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.817647  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.817673  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.817786  669182 provision.go:143] copyHostCerts
	I0317 13:45:54.817856  669182 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:45:54.817874  669182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:45:54.817944  669182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:45:54.818074  669182 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:45:54.818085  669182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:45:54.818109  669182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:45:54.818184  669182 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:45:54.818192  669182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:45:54.818210  669182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:45:54.818266  669182 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-803027 san=[127.0.0.1 192.168.61.229 localhost minikube old-k8s-version-803027]
	I0317 13:45:54.889126  669182 provision.go:177] copyRemoteCerts
	I0317 13:45:54.889186  669182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:45:54.889221  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.891953  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.892232  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.892264  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.892474  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:54.892699  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.892887  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:54.893025  669182 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:45:54.973309  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:45:54.996085  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0317 13:45:55.018040  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 13:45:55.040573  669182 provision.go:87] duration metric: took 228.70201ms to configureAuth
	I0317 13:45:55.040619  669182 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:45:55.040824  669182 config.go:182] Loaded profile config "old-k8s-version-803027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:45:55.040917  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:55.043628  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.043972  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.044002  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.044188  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:55.044433  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.044632  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.044791  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:55.044972  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:55.045171  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:55.045187  669182 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:45:55.255485  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:45:55.255516  669182 main.go:141] libmachine: Checking connection to Docker...
	I0317 13:45:55.255526  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetURL
	I0317 13:45:55.256899  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | using libvirt version 6000000
	I0317 13:45:55.258951  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.259226  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.259255  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.259416  669182 main.go:141] libmachine: Docker is up and running!
	I0317 13:45:55.259435  669182 main.go:141] libmachine: Reticulating splines...
	I0317 13:45:55.259443  669182 client.go:171] duration metric: took 25.612919221s to LocalClient.Create
	I0317 13:45:55.259477  669182 start.go:167] duration metric: took 25.612991301s to libmachine.API.Create "old-k8s-version-803027"
	I0317 13:45:55.259495  669182 start.go:293] postStartSetup for "old-k8s-version-803027" (driver="kvm2")
	I0317 13:45:55.259508  669182 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:45:55.259557  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:55.259777  669182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:45:55.259809  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:55.261641  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.261966  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.261990  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.262117  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:55.262271  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.262461  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:55.262622  669182 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:45:55.341414  669182 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:45:55.345240  669182 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:45:55.345266  669182 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:45:55.345329  669182 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:45:55.345398  669182 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:45:55.345551  669182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:45:55.354548  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:45:55.376343  669182 start.go:296] duration metric: took 116.829514ms for postStartSetup
	I0317 13:45:55.376407  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetConfigRaw
	I0317 13:45:55.377042  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:45:55.379611  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.379943  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.379964  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.380341  669182 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/config.json ...
	I0317 13:45:55.380546  669182 start.go:128] duration metric: took 25.755967347s to createHost
	I0317 13:45:55.380570  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:55.382921  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.383231  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.383262  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.383453  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:55.383646  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.383814  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.383933  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:55.384090  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:55.384328  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:55.384340  669182 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:45:55.484029  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219155.461524795
	
	I0317 13:45:55.484057  669182 fix.go:216] guest clock: 1742219155.461524795
	I0317 13:45:55.484068  669182 fix.go:229] Guest: 2025-03-17 13:45:55.461524795 +0000 UTC Remote: 2025-03-17 13:45:55.380556744 +0000 UTC m=+52.061335954 (delta=80.968051ms)
	I0317 13:45:55.484095  669182 fix.go:200] guest clock delta is within tolerance: 80.968051ms
	I0317 13:45:55.484100  669182 start.go:83] releasing machines lock for "old-k8s-version-803027", held for 25.859714629s
	I0317 13:45:55.484136  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:55.484453  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:45:55.487346  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.487796  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.487834  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.488025  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:55.488573  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:55.488782  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:55.488845  669182 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:45:55.488902  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:55.489040  669182 ssh_runner.go:195] Run: cat /version.json
	I0317 13:45:55.489068  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:55.491717  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.492024  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.492096  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.492121  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.492291  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:55.492441  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.492486  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.492513  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.492636  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:55.492721  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:55.492780  669182 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:45:55.492860  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.492992  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:55.493143  669182 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:45:55.569642  669182 ssh_runner.go:195] Run: systemctl --version
	I0317 13:45:55.590175  669182 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:45:55.747427  669182 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:45:55.755146  669182 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:45:55.755212  669182 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:45:55.771027  669182 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:45:55.771052  669182 start.go:495] detecting cgroup driver to use...
	I0317 13:45:55.771121  669182 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:45:55.787897  669182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:45:55.800319  669182 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:45:55.800381  669182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:45:55.813337  669182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:45:55.825564  669182 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:45:55.936983  669182 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:45:56.101386  669182 docker.go:233] disabling docker service ...
	I0317 13:45:56.101467  669182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:45:56.118628  669182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:45:56.132886  669182 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:45:56.269977  669182 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:45:56.395983  669182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:45:56.409048  669182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:45:56.426036  669182 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0317 13:45:56.426119  669182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:45:56.436248  669182 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:45:56.436311  669182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:45:56.445986  669182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:45:56.456227  669182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:45:56.465798  669182 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:45:56.475830  669182 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:45:56.484444  669182 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:45:56.484524  669182 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:45:56.496073  669182 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:45:56.504879  669182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:45:56.625040  669182 ssh_runner.go:195] Run: sudo systemctl restart crio
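	Taken together, the commands above are the CRI-O preparation pass for this profile: point crictl at the crio socket, pin the pause image, switch the cgroup manager to cgroupfs, drop and re-add the conmon_cgroup setting, clear stale minikube CNI config, load br_netfilter when the sysctl probe fails, enable IPv4 forwarding, then daemon-reload and restart crio. A compact sketch listing that sequence (the wrapper function is hypothetical; the command strings follow the log, lightly re-quoted for readability):

package main

import "fmt"

// crioPrepCommands lists, in order, the remote commands from the log above that
// reconfigure CRI-O before kubeadm runs. The failed sysctl probe that triggered
// the modprobe is omitted. Sketch only; values are the literals seen in the log.
func crioPrepCommands(pauseImage, cgroupMgr string) []string {
	return []string{
		// point crictl at the CRI-O socket
		`/bin/bash -c "sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml"`,
		// pin the pause image and cgroup manager in the crio drop-in config
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupMgr),
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		// networking prerequisites
		`sudo rm -rf /etc/cni/net.mk`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		// pick up the new configuration
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
}

func main() {
	for _, c := range crioPrepCommands("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(c)
	}
}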
	I0317 13:45:56.713631  669182 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:45:56.713716  669182 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:45:56.718009  669182 start.go:563] Will wait 60s for crictl version
	I0317 13:45:56.718073  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:45:56.721492  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:45:56.754725  669182 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:45:56.754802  669182 ssh_runner.go:195] Run: crio --version
	I0317 13:45:56.780089  669182 ssh_runner.go:195] Run: crio --version
	I0317 13:45:56.809423  669182 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0317 13:45:56.810779  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:45:56.813403  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:56.813706  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:56.813742  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:56.813952  669182 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0317 13:45:56.818027  669182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:45:56.830111  669182 kubeadm.go:883] updating cluster {Name:old-k8s-version-803027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:45:56.830229  669182 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0317 13:45:56.830283  669182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:45:56.862558  669182 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0317 13:45:56.862640  669182 ssh_runner.go:195] Run: which lz4
	I0317 13:45:56.866543  669182 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:45:56.870714  669182 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:45:56.870754  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0317 13:45:58.202222  669182 crio.go:462] duration metric: took 1.335699881s to copy over tarball
	I0317 13:45:58.202304  669182 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
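	Since no preloaded images were found in the CRI-O image store, the v1.20.0 preload tarball is copied to /preloaded.tar.lz4 on the guest and unpacked into /var with extended attributes preserved. A minimal sketch of that extraction step, assuming a local run rather than the real over-SSH invocation (paths and flags follow the log):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var, mirroring the
// tar invocation in the log above. Hypothetical helper; in minikube this runs
// on the guest over SSH.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // decompress with lz4
		"-C", "/var", // extract under /var (container storage lives there)
		"-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}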
	I0317 13:46:02.380180  669958 start.go:364] duration metric: took 13.705236273s to acquireMachinesLock for "no-preload-142429"
	I0317 13:46:02.380269  669958 start.go:93] Provisioning new machine with config: &{Name:no-preload-142429 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-142429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:46:02.380365  669958 start.go:125] createHost starting for "" (driver="kvm2")
	I0317 13:46:00.612590  669182 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.410253696s)
	I0317 13:46:00.612628  669182 crio.go:469] duration metric: took 2.410371799s to extract the tarball
	I0317 13:46:00.612638  669182 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:46:00.654075  669182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:46:00.698256  669182 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0317 13:46:00.698287  669182 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0317 13:46:00.698342  669182 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:46:00.698357  669182 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:00.698420  669182 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0317 13:46:00.698433  669182 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:00.698448  669182 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0317 13:46:00.698456  669182 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:00.698441  669182 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:00.698418  669182 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:00.699748  669182 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:00.699810  669182 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:00.699829  669182 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:00.699753  669182 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:00.699974  669182 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:00.699982  669182 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0317 13:46:00.699996  669182 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:46:00.699999  669182 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0317 13:46:00.847096  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:00.850522  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:00.853558  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:00.853734  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0317 13:46:00.871165  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:00.888266  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0317 13:46:00.894171  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:00.941720  669182 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0317 13:46:00.941785  669182 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:00.941840  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:00.950196  669182 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0317 13:46:00.950259  669182 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:00.950316  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:00.967887  669182 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0317 13:46:00.967950  669182 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0317 13:46:00.967986  669182 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0317 13:46:00.968006  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:00.968025  669182 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:00.968078  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:00.997764  669182 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0317 13:46:00.997815  669182 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:00.997872  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:01.012339  669182 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0317 13:46:01.012391  669182 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0317 13:46:01.012439  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:01.013578  669182 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0317 13:46:01.013603  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:01.013619  669182 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:01.013651  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:01.013654  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:01.013722  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:01.013657  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:46:01.013730  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:01.021526  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:46:01.105636  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:01.130992  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:01.131047  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:01.131083  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:46:01.131047  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:01.131131  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:01.145563  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:46:01.213442  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:01.290581  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:01.290690  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:01.290713  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:01.290782  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:46:01.290796  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:01.290852  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:46:01.317298  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0317 13:46:01.413782  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:01.421371  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0317 13:46:01.421396  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0317 13:46:01.426911  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0317 13:46:01.433554  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0317 13:46:01.433591  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0317 13:46:01.459824  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0317 13:46:02.410062  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:46:02.557273  669182 cache_images.go:92] duration metric: took 1.858965497s to LoadCachedImages
	W0317 13:46:02.557370  669182 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
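The cache_images pass above follows the same pattern for every control-plane image: if the expected digest is missing from the container runtime, remove the stale tag with crictl and try to re-load the image from minikube's local cache; here the cache file itself is missing, so the loader gives up and kubeadm is left to pull the images during init. A rough per-image sketch of what the log shows (paths taken verbatim from the lines above):

    sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
    stat /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
    # the stat fails with "no such file or directory", which is exactly the warning logged above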
	I0317 13:46:02.557387  669182 kubeadm.go:934] updating node { 192.168.61.229 8443 v1.20.0 crio true true} ...
	I0317 13:46:02.557499  669182 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-803027 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
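The kubelet command line above is installed as a systemd drop-in (the 430-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) and activated with a daemon-reload. Assuming the drop-in and /lib/systemd/system/kubelet.service are already in place, the manual equivalent is roughly:

    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    systemctl cat kubelet   # shows kubelet.service merged with the 10-kubeadm.conf override (the ExecStart above)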
	I0317 13:46:02.557577  669182 ssh_runner.go:195] Run: crio config
	I0317 13:46:02.608529  669182 cni.go:84] Creating CNI manager for ""
	I0317 13:46:02.608557  669182 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:46:02.608568  669182 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:46:02.608586  669182 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-803027 NodeName:old-k8s-version-803027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0317 13:46:02.608699  669182 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-803027"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
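The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what get written to /var/tmp/minikube/kubeadm.yaml.new below and later handed to kubeadm init. As kubeadm's own preflight output further down suggests, the required images can also be pre-pulled from the same file; a hedged example using the bundled binary:

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml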
	
	I0317 13:46:02.608760  669182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0317 13:46:02.618979  669182 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:46:02.619041  669182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:46:02.628619  669182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0317 13:46:02.644430  669182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:46:02.663061  669182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0317 13:46:02.682741  669182 ssh_runner.go:195] Run: grep 192.168.61.229	control-plane.minikube.internal$ /etc/hosts
	I0317 13:46:02.688095  669182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:46:02.706123  669182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:46:02.860760  669182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:46:02.877957  669182 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027 for IP: 192.168.61.229
	I0317 13:46:02.877991  669182 certs.go:194] generating shared ca certs ...
	I0317 13:46:02.878015  669182 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:02.878212  669182 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:46:02.878276  669182 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:46:02.878290  669182 certs.go:256] generating profile certs ...
	I0317 13:46:02.878371  669182 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.key
	I0317 13:46:02.878411  669182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.crt with IP's: []
	I0317 13:46:02.943760  669182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.crt ...
	I0317 13:46:02.943802  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.crt: {Name:mk65b93f15885e6dbfc5fe81f4825ede29af84ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:02.944020  669182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.key ...
	I0317 13:46:02.944044  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.key: {Name:mk8054b97714f5519489dbabc3adec69734611eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:02.944179  669182 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key.729f1cc3
	I0317 13:46:02.944208  669182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt.729f1cc3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.229]
	I0317 13:46:02.518165  669958 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0317 13:46:02.518396  669958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:46:02.518435  669958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:46:02.534042  669958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45011
	I0317 13:46:02.534501  669958 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:46:02.535062  669958 main.go:141] libmachine: Using API Version  1
	I0317 13:46:02.535088  669958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:46:02.535455  669958 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:46:02.535676  669958 main.go:141] libmachine: (no-preload-142429) Calling .GetMachineName
	I0317 13:46:02.535824  669958 main.go:141] libmachine: (no-preload-142429) Calling .DriverName
	I0317 13:46:02.535970  669958 start.go:159] libmachine.API.Create for "no-preload-142429" (driver="kvm2")
	I0317 13:46:02.535997  669958 client.go:168] LocalClient.Create starting
	I0317 13:46:02.536034  669958 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem
	I0317 13:46:02.536077  669958 main.go:141] libmachine: Decoding PEM data...
	I0317 13:46:02.536101  669958 main.go:141] libmachine: Parsing certificate...
	I0317 13:46:02.536176  669958 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem
	I0317 13:46:02.536204  669958 main.go:141] libmachine: Decoding PEM data...
	I0317 13:46:02.536223  669958 main.go:141] libmachine: Parsing certificate...
	I0317 13:46:02.536248  669958 main.go:141] libmachine: Running pre-create checks...
	I0317 13:46:02.536262  669958 main.go:141] libmachine: (no-preload-142429) Calling .PreCreateCheck
	I0317 13:46:02.536555  669958 main.go:141] libmachine: (no-preload-142429) Calling .GetConfigRaw
	I0317 13:46:02.536919  669958 main.go:141] libmachine: Creating machine...
	I0317 13:46:02.536935  669958 main.go:141] libmachine: (no-preload-142429) Calling .Create
	I0317 13:46:02.537042  669958 main.go:141] libmachine: (no-preload-142429) creating KVM machine...
	I0317 13:46:02.537073  669958 main.go:141] libmachine: (no-preload-142429) creating network...
	I0317 13:46:02.538300  669958 main.go:141] libmachine: (no-preload-142429) DBG | found existing default KVM network
	I0317 13:46:02.539813  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:02.539664  670085 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013dd0}
	I0317 13:46:02.539846  669958 main.go:141] libmachine: (no-preload-142429) DBG | created network xml: 
	I0317 13:46:02.539872  669958 main.go:141] libmachine: (no-preload-142429) DBG | <network>
	I0317 13:46:02.539902  669958 main.go:141] libmachine: (no-preload-142429) DBG |   <name>mk-no-preload-142429</name>
	I0317 13:46:02.539937  669958 main.go:141] libmachine: (no-preload-142429) DBG |   <dns enable='no'/>
	I0317 13:46:02.539951  669958 main.go:141] libmachine: (no-preload-142429) DBG |   
	I0317 13:46:02.539960  669958 main.go:141] libmachine: (no-preload-142429) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0317 13:46:02.539972  669958 main.go:141] libmachine: (no-preload-142429) DBG |     <dhcp>
	I0317 13:46:02.539983  669958 main.go:141] libmachine: (no-preload-142429) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0317 13:46:02.539996  669958 main.go:141] libmachine: (no-preload-142429) DBG |     </dhcp>
	I0317 13:46:02.540009  669958 main.go:141] libmachine: (no-preload-142429) DBG |   </ip>
	I0317 13:46:02.540048  669958 main.go:141] libmachine: (no-preload-142429) DBG |   
	I0317 13:46:02.540071  669958 main.go:141] libmachine: (no-preload-142429) DBG | </network>
	I0317 13:46:02.540083  669958 main.go:141] libmachine: (no-preload-142429) DBG | 
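minikube creates this network through the libmachine kvm2 driver rather than the libvirt CLI, but the definition above is ordinary libvirt network XML. A roughly equivalent manual sequence, assuming the XML is saved to a file (the file name here is illustrative):

    virsh net-define mk-no-preload-142429.xml    # the <network> block printed above
    virsh net-start mk-no-preload-142429
    virsh net-autostart mk-no-preload-142429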
	I0317 13:46:02.673431  669958 main.go:141] libmachine: (no-preload-142429) DBG | trying to create private KVM network mk-no-preload-142429 192.168.39.0/24...
	I0317 13:46:02.755152  669958 main.go:141] libmachine: (no-preload-142429) DBG | private KVM network mk-no-preload-142429 192.168.39.0/24 created
	I0317 13:46:02.755209  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:02.755107  670085 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:46:02.755237  669958 main.go:141] libmachine: (no-preload-142429) setting up store path in /home/jenkins/minikube-integration/20539-621978/.minikube/machines/no-preload-142429 ...
	I0317 13:46:02.755258  669958 main.go:141] libmachine: (no-preload-142429) building disk image from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0317 13:46:02.755281  669958 main.go:141] libmachine: (no-preload-142429) Downloading /home/jenkins/minikube-integration/20539-621978/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 13:46:03.045515  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:03.045392  670085 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/no-preload-142429/id_rsa...
	I0317 13:46:03.421135  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:03.421019  670085 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/no-preload-142429/no-preload-142429.rawdisk...
	I0317 13:46:03.421166  669958 main.go:141] libmachine: (no-preload-142429) DBG | Writing magic tar header
	I0317 13:46:03.421183  669958 main.go:141] libmachine: (no-preload-142429) DBG | Writing SSH key tar header
	I0317 13:46:03.421195  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:03.421139  670085 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/no-preload-142429 ...
	I0317 13:46:03.456950  669958 main.go:141] libmachine: (no-preload-142429) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/no-preload-142429 (perms=drwx------)
	I0317 13:46:03.456978  669958 main.go:141] libmachine: (no-preload-142429) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/no-preload-142429
	I0317 13:46:03.456989  669958 main.go:141] libmachine: (no-preload-142429) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines (perms=drwxr-xr-x)
	I0317 13:46:03.457005  669958 main.go:141] libmachine: (no-preload-142429) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube (perms=drwxr-xr-x)
	I0317 13:46:03.457014  669958 main.go:141] libmachine: (no-preload-142429) setting executable bit set on /home/jenkins/minikube-integration/20539-621978 (perms=drwxrwxr-x)
	I0317 13:46:03.457026  669958 main.go:141] libmachine: (no-preload-142429) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines
	I0317 13:46:03.457050  669958 main.go:141] libmachine: (no-preload-142429) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:46:03.457063  669958 main.go:141] libmachine: (no-preload-142429) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978
	I0317 13:46:03.457073  669958 main.go:141] libmachine: (no-preload-142429) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0317 13:46:03.457093  669958 main.go:141] libmachine: (no-preload-142429) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0317 13:46:03.457104  669958 main.go:141] libmachine: (no-preload-142429) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0317 13:46:03.457115  669958 main.go:141] libmachine: (no-preload-142429) DBG | checking permissions on dir: /home/jenkins
	I0317 13:46:03.457122  669958 main.go:141] libmachine: (no-preload-142429) DBG | checking permissions on dir: /home
	I0317 13:46:03.457134  669958 main.go:141] libmachine: (no-preload-142429) DBG | skipping /home - not owner
	I0317 13:46:03.457142  669958 main.go:141] libmachine: (no-preload-142429) creating domain...
	I0317 13:46:03.458470  669958 main.go:141] libmachine: (no-preload-142429) define libvirt domain using xml: 
	I0317 13:46:03.458492  669958 main.go:141] libmachine: (no-preload-142429) <domain type='kvm'>
	I0317 13:46:03.458503  669958 main.go:141] libmachine: (no-preload-142429)   <name>no-preload-142429</name>
	I0317 13:46:03.458511  669958 main.go:141] libmachine: (no-preload-142429)   <memory unit='MiB'>2200</memory>
	I0317 13:46:03.458520  669958 main.go:141] libmachine: (no-preload-142429)   <vcpu>2</vcpu>
	I0317 13:46:03.458529  669958 main.go:141] libmachine: (no-preload-142429)   <features>
	I0317 13:46:03.458537  669958 main.go:141] libmachine: (no-preload-142429)     <acpi/>
	I0317 13:46:03.458543  669958 main.go:141] libmachine: (no-preload-142429)     <apic/>
	I0317 13:46:03.458561  669958 main.go:141] libmachine: (no-preload-142429)     <pae/>
	I0317 13:46:03.458570  669958 main.go:141] libmachine: (no-preload-142429)     
	I0317 13:46:03.458578  669958 main.go:141] libmachine: (no-preload-142429)   </features>
	I0317 13:46:03.458585  669958 main.go:141] libmachine: (no-preload-142429)   <cpu mode='host-passthrough'>
	I0317 13:46:03.458595  669958 main.go:141] libmachine: (no-preload-142429)   
	I0317 13:46:03.458601  669958 main.go:141] libmachine: (no-preload-142429)   </cpu>
	I0317 13:46:03.458610  669958 main.go:141] libmachine: (no-preload-142429)   <os>
	I0317 13:46:03.458619  669958 main.go:141] libmachine: (no-preload-142429)     <type>hvm</type>
	I0317 13:46:03.458627  669958 main.go:141] libmachine: (no-preload-142429)     <boot dev='cdrom'/>
	I0317 13:46:03.458636  669958 main.go:141] libmachine: (no-preload-142429)     <boot dev='hd'/>
	I0317 13:46:03.458645  669958 main.go:141] libmachine: (no-preload-142429)     <bootmenu enable='no'/>
	I0317 13:46:03.458653  669958 main.go:141] libmachine: (no-preload-142429)   </os>
	I0317 13:46:03.458661  669958 main.go:141] libmachine: (no-preload-142429)   <devices>
	I0317 13:46:03.458671  669958 main.go:141] libmachine: (no-preload-142429)     <disk type='file' device='cdrom'>
	I0317 13:46:03.458688  669958 main.go:141] libmachine: (no-preload-142429)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/no-preload-142429/boot2docker.iso'/>
	I0317 13:46:03.458698  669958 main.go:141] libmachine: (no-preload-142429)       <target dev='hdc' bus='scsi'/>
	I0317 13:46:03.458707  669958 main.go:141] libmachine: (no-preload-142429)       <readonly/>
	I0317 13:46:03.458716  669958 main.go:141] libmachine: (no-preload-142429)     </disk>
	I0317 13:46:03.458725  669958 main.go:141] libmachine: (no-preload-142429)     <disk type='file' device='disk'>
	I0317 13:46:03.458734  669958 main.go:141] libmachine: (no-preload-142429)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0317 13:46:03.458750  669958 main.go:141] libmachine: (no-preload-142429)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/no-preload-142429/no-preload-142429.rawdisk'/>
	I0317 13:46:03.458760  669958 main.go:141] libmachine: (no-preload-142429)       <target dev='hda' bus='virtio'/>
	I0317 13:46:03.458768  669958 main.go:141] libmachine: (no-preload-142429)     </disk>
	I0317 13:46:03.458778  669958 main.go:141] libmachine: (no-preload-142429)     <interface type='network'>
	I0317 13:46:03.458787  669958 main.go:141] libmachine: (no-preload-142429)       <source network='mk-no-preload-142429'/>
	I0317 13:46:03.458797  669958 main.go:141] libmachine: (no-preload-142429)       <model type='virtio'/>
	I0317 13:46:03.458805  669958 main.go:141] libmachine: (no-preload-142429)     </interface>
	I0317 13:46:03.458815  669958 main.go:141] libmachine: (no-preload-142429)     <interface type='network'>
	I0317 13:46:03.458823  669958 main.go:141] libmachine: (no-preload-142429)       <source network='default'/>
	I0317 13:46:03.458833  669958 main.go:141] libmachine: (no-preload-142429)       <model type='virtio'/>
	I0317 13:46:03.458841  669958 main.go:141] libmachine: (no-preload-142429)     </interface>
	I0317 13:46:03.458850  669958 main.go:141] libmachine: (no-preload-142429)     <serial type='pty'>
	I0317 13:46:03.458859  669958 main.go:141] libmachine: (no-preload-142429)       <target port='0'/>
	I0317 13:46:03.458867  669958 main.go:141] libmachine: (no-preload-142429)     </serial>
	I0317 13:46:03.458875  669958 main.go:141] libmachine: (no-preload-142429)     <console type='pty'>
	I0317 13:46:03.458886  669958 main.go:141] libmachine: (no-preload-142429)       <target type='serial' port='0'/>
	I0317 13:46:03.458893  669958 main.go:141] libmachine: (no-preload-142429)     </console>
	I0317 13:46:03.458900  669958 main.go:141] libmachine: (no-preload-142429)     <rng model='virtio'>
	I0317 13:46:03.458909  669958 main.go:141] libmachine: (no-preload-142429)       <backend model='random'>/dev/random</backend>
	I0317 13:46:03.458915  669958 main.go:141] libmachine: (no-preload-142429)     </rng>
	I0317 13:46:03.458921  669958 main.go:141] libmachine: (no-preload-142429)     
	I0317 13:46:03.458930  669958 main.go:141] libmachine: (no-preload-142429)     
	I0317 13:46:03.458940  669958 main.go:141] libmachine: (no-preload-142429)   </devices>
	I0317 13:46:03.458945  669958 main.go:141] libmachine: (no-preload-142429) </domain>
	I0317 13:46:03.458955  669958 main.go:141] libmachine: (no-preload-142429) 
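As with the network, the domain is created via the driver API (the ".Create" call above) rather than virsh, but the XML is standard libvirt. A hedged manual equivalent, again with an illustrative file name:

    virsh define no-preload-142429.xml     # the <domain> block printed above
    virsh start no-preload-142429
    virsh domifaddr no-preload-142429      # one way to find the DHCP lease the driver later looks up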
	I0317 13:46:03.417250  669182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt.729f1cc3 ...
	I0317 13:46:03.417287  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt.729f1cc3: {Name:mk571d80349a8579bd389bfe3a89f496b4f4b4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:03.456815  669182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key.729f1cc3 ...
	I0317 13:46:03.456863  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key.729f1cc3: {Name:mk92ffa0b12a6ea74b4fe2acb8062b7b3ddfb45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:03.457031  669182 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt.729f1cc3 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt
	I0317 13:46:03.457146  669182 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key.729f1cc3 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key
	I0317 13:46:03.457235  669182 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.key
	I0317 13:46:03.457259  669182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.crt with IP's: []
	I0317 13:46:03.535835  669182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.crt ...
	I0317 13:46:03.535865  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.crt: {Name:mk9430ddd69712bd5f3dd62ef4266a5b3bbca50d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:03.591292  669182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.key ...
	I0317 13:46:03.591343  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.key: {Name:mkf70d4c286306fd785f19ecee89372f3d7ee79c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:03.591635  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:46:03.591687  669182 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:46:03.591702  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:46:03.591733  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:46:03.591764  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:46:03.591801  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:46:03.591854  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:46:03.592506  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:46:03.621945  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:46:03.646073  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:46:03.671158  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:46:03.695266  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0317 13:46:03.722011  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:46:03.753431  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:46:03.785230  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 13:46:03.820225  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:46:03.854067  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:46:03.878551  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:46:03.905438  669182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:46:03.924750  669182 ssh_runner.go:195] Run: openssl version
	I0317 13:46:03.930579  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:46:03.941686  669182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:46:03.946192  669182 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:46:03.946259  669182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:46:03.952065  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:46:03.962713  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:46:03.973492  669182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:46:03.979228  669182 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:46:03.979299  669182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:46:03.984833  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:46:03.995609  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:46:04.006779  669182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:46:04.011688  669182 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:46:04.011757  669182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:46:04.019347  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
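The openssl/ln pairs above implement OpenSSL's hashed-directory convention: each CA is linked under /etc/ssl/certs as <subject-hash>.0 so TLS libraries can locate it by hash. Condensed for one of the certificates shown:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"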
	I0317 13:46:04.030095  669182 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:46:04.035429  669182 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:46:04.035484  669182 kubeadm.go:392] StartCluster: {Name:old-k8s-version-803027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:46:04.035615  669182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:46:04.035675  669182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:46:04.080232  669182 cri.go:89] found id: ""
	I0317 13:46:04.080306  669182 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:46:04.093474  669182 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:46:04.103662  669182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:46:04.113196  669182 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:46:04.113216  669182 kubeadm.go:157] found existing configuration files:
	
	I0317 13:46:04.113265  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:46:04.122656  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:46:04.122730  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:46:04.132492  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:46:04.141395  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:46:04.141476  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:46:04.150576  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:46:04.159202  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:46:04.159285  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:46:04.169461  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:46:04.178572  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:46:04.178638  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
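The grep/rm sequence above is the stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. On this first start none of the files exist, so each grep exits with status 2 and the rm calls are no-ops. The same logic as a loop (sketch only):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done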
	I0317 13:46:04.188061  669182 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:46:04.290394  669182 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0317 13:46:04.290484  669182 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:46:04.434696  669182 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:46:04.434914  669182 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:46:04.435058  669182 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0317 13:46:04.612260  669182 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:46:02.141930  669506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:46:02.141963  669506 machine.go:96] duration metric: took 6.632344975s to provisionDockerMachine
	I0317 13:46:02.141976  669506 start.go:293] postStartSetup for "kubernetes-upgrade-312638" (driver="kvm2")
	I0317 13:46:02.141992  669506 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:46:02.142014  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:46:02.142391  669506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:46:02.142419  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:46:02.144868  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:02.145196  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:46:02.145225  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:02.145418  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:46:02.145653  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:46:02.145855  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:46:02.146045  669506 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa Username:docker}
	I0317 13:46:02.229290  669506 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:46:02.233002  669506 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:46:02.233026  669506 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:46:02.233096  669506 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:46:02.233181  669506 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:46:02.233305  669506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:46:02.241602  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:46:02.264055  669506 start.go:296] duration metric: took 122.063201ms for postStartSetup
	I0317 13:46:02.264094  669506 fix.go:56] duration metric: took 6.779815513s for fixHost
	I0317 13:46:02.264116  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:46:02.267085  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:02.267471  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:46:02.267502  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:02.267733  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:46:02.267957  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:46:02.268154  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:46:02.268334  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:46:02.268478  669506 main.go:141] libmachine: Using SSH client type: native
	I0317 13:46:02.268751  669506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.55 22 <nil> <nil>}
	I0317 13:46:02.268766  669506 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:46:02.379993  669506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219162.359196187
	
	I0317 13:46:02.380022  669506 fix.go:216] guest clock: 1742219162.359196187
	I0317 13:46:02.380030  669506 fix.go:229] Guest: 2025-03-17 13:46:02.359196187 +0000 UTC Remote: 2025-03-17 13:46:02.26409782 +0000 UTC m=+40.729882481 (delta=95.098367ms)
	I0317 13:46:02.380051  669506 fix.go:200] guest clock delta is within tolerance: 95.098367ms
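The clock check above compares the guest's wall clock, read over SSH, against the host's and accepts the delta when it is inside the tolerance; the ~95ms skew seen here passes. In shell terms, roughly:

    date +%s.%N   # run inside the VM over SSH (the "guest clock" value)
    date +%s.%N   # run on the host; minikube diffs the two and only intervenes if the drift exceeds the tolerance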
	I0317 13:46:02.380056  669506 start.go:83] releasing machines lock for "kubernetes-upgrade-312638", held for 6.895827857s
	I0317 13:46:02.380083  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:46:02.380401  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetIP
	I0317 13:46:02.383372  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:02.383799  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:46:02.383826  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:02.384000  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:46:02.384501  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:46:02.384694  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:46:02.384803  669506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:46:02.384853  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:46:02.384916  669506 ssh_runner.go:195] Run: cat /version.json
	I0317 13:46:02.384944  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHHostname
	I0317 13:46:02.387331  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:02.387626  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:46:02.387670  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:02.387695  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:02.387922  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:46:02.388061  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:46:02.388084  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:02.388105  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:46:02.388268  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHPort
	I0317 13:46:02.388284  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:46:02.388448  669506 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa Username:docker}
	I0317 13:46:02.388506  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHKeyPath
	I0317 13:46:02.388667  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetSSHUsername
	I0317 13:46:02.388877  669506 sshutil.go:53] new ssh client: &{IP:192.168.50.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/kubernetes-upgrade-312638/id_rsa Username:docker}
	I0317 13:46:02.488953  669506 ssh_runner.go:195] Run: systemctl --version
	I0317 13:46:02.495135  669506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:46:02.650453  669506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:46:02.658486  669506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:46:02.658574  669506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:46:02.667350  669506 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0317 13:46:02.667379  669506 start.go:495] detecting cgroup driver to use...
	I0317 13:46:02.667455  669506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:46:02.697923  669506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:46:02.713717  669506 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:46:02.713787  669506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:46:02.730380  669506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:46:02.746309  669506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:46:02.909464  669506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:46:03.103831  669506 docker.go:233] disabling docker service ...
	I0317 13:46:03.103922  669506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:46:03.130617  669506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:46:03.159118  669506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:46:03.325745  669506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:46:03.504641  669506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:46:03.519922  669506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:46:03.538626  669506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0317 13:46:03.538691  669506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:46:03.548603  669506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:46:03.548660  669506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:46:03.558439  669506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:46:03.568029  669506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:46:03.577625  669506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:46:03.587353  669506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:46:03.598266  669506 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:46:03.610837  669506 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:46:03.624547  669506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:46:03.634393  669506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:46:03.644823  669506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:46:03.847403  669506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:46:05.897516  669506 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.050066348s)
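The CRI-O preparation above reduces to a few sed edits against /etc/crio/crio.conf.d/02-crio.conf followed by a restart so they take effect; condensed to the two settings this run cares about most:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio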
	I0317 13:46:05.897554  669506 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:46:05.897615  669506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:46:05.904711  669506 start.go:563] Will wait 60s for crictl version
	I0317 13:46:05.904768  669506 ssh_runner.go:195] Run: which crictl
	I0317 13:46:05.909125  669506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:46:05.950079  669506 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:46:05.950182  669506 ssh_runner.go:195] Run: crio --version
	I0317 13:46:05.978791  669506 ssh_runner.go:195] Run: crio --version
	I0317 13:46:06.009782  669506 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0317 13:46:06.010995  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetIP
	I0317 13:46:06.014323  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:06.014712  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:41", ip: ""} in network mk-kubernetes-upgrade-312638: {Iface:virbr2 ExpiryTime:2025-03-17 14:44:53 +0000 UTC Type:0 Mac:52:54:00:2a:ac:41 Iaid: IPaddr:192.168.50.55 Prefix:24 Hostname:kubernetes-upgrade-312638 Clientid:01:52:54:00:2a:ac:41}
	I0317 13:46:06.014744  669506 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined IP address 192.168.50.55 and MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:46:06.014946  669506 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0317 13:46:06.020479  669506 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-312638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:46:06.020585  669506 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:46:06.020624  669506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:46:06.067215  669506 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:46:06.067245  669506 crio.go:433] Images already preloaded, skipping extraction
	I0317 13:46:06.067297  669506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:46:06.108930  669506 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:46:06.108957  669506 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:46:06.108967  669506 kubeadm.go:934] updating node { 192.168.50.55 8443 v1.32.2 crio true true} ...
	I0317 13:46:06.109085  669506 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-312638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:46:06.109170  669506 ssh_runner.go:195] Run: crio config
	I0317 13:46:06.220198  669506 cni.go:84] Creating CNI manager for ""
	I0317 13:46:06.220297  669506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:46:06.220327  669506 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:46:06.220380  669506 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.55 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-312638 NodeName:kubernetes-upgrade-312638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:46:06.220581  669506 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-312638"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.55"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.55"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:46:06.220696  669506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:46:06.245454  669506 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:46:06.245551  669506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:46:06.338041  669506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0317 13:46:06.431888  669506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:46:06.474288  669506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0317 13:46:04.756445  669182 out.go:235]   - Generating certificates and keys ...
	I0317 13:46:04.756610  669182 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:46:04.756697  669182 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:46:04.756790  669182 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:46:04.856688  669182 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:46:05.165268  669182 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:46:05.310401  669182 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:46:05.439918  669182 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:46:05.440268  669182 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-803027] and IPs [192.168.61.229 127.0.0.1 ::1]
	I0317 13:46:05.571769  669182 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:46:05.572164  669182 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-803027] and IPs [192.168.61.229 127.0.0.1 ::1]
	I0317 13:46:05.701956  669182 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:46:05.960547  669182 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:46:06.073883  669182 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:46:06.074331  669182 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:46:06.182666  669182 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:46:06.338391  669182 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:46:06.961087  669182 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:46:07.087463  669182 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:46:07.107754  669182 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:46:07.111210  669182 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:46:07.111311  669182 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:46:07.295582  669182 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:46:07.297308  669182 out.go:235]   - Booting up control plane ...
	I0317 13:46:07.297457  669182 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:46:07.303807  669182 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:46:07.304846  669182 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:46:07.305666  669182 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:46:07.310548  669182 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0317 13:46:03.636810  669958 main.go:141] libmachine: (no-preload-142429) DBG | domain no-preload-142429 has defined MAC address 52:54:00:58:f8:90 in network default
	I0317 13:46:03.637407  669958 main.go:141] libmachine: (no-preload-142429) DBG | domain no-preload-142429 has defined MAC address 52:54:00:52:c2:e7 in network mk-no-preload-142429
	I0317 13:46:03.637453  669958 main.go:141] libmachine: (no-preload-142429) starting domain...
	I0317 13:46:03.637478  669958 main.go:141] libmachine: (no-preload-142429) ensuring networks are active...
	I0317 13:46:03.638176  669958 main.go:141] libmachine: (no-preload-142429) Ensuring network default is active
	I0317 13:46:03.638575  669958 main.go:141] libmachine: (no-preload-142429) Ensuring network mk-no-preload-142429 is active
	I0317 13:46:03.639266  669958 main.go:141] libmachine: (no-preload-142429) getting domain XML...
	I0317 13:46:03.640133  669958 main.go:141] libmachine: (no-preload-142429) creating domain...
	I0317 13:46:05.783721  669958 main.go:141] libmachine: (no-preload-142429) waiting for IP...
	I0317 13:46:05.784565  669958 main.go:141] libmachine: (no-preload-142429) DBG | domain no-preload-142429 has defined MAC address 52:54:00:52:c2:e7 in network mk-no-preload-142429
	I0317 13:46:05.785021  669958 main.go:141] libmachine: (no-preload-142429) DBG | unable to find current IP address of domain no-preload-142429 in network mk-no-preload-142429
	I0317 13:46:05.785047  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:05.785012  670085 retry.go:31] will retry after 197.888199ms: waiting for domain to come up
	I0317 13:46:05.984667  669958 main.go:141] libmachine: (no-preload-142429) DBG | domain no-preload-142429 has defined MAC address 52:54:00:52:c2:e7 in network mk-no-preload-142429
	I0317 13:46:05.985184  669958 main.go:141] libmachine: (no-preload-142429) DBG | unable to find current IP address of domain no-preload-142429 in network mk-no-preload-142429
	I0317 13:46:05.985211  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:05.985150  670085 retry.go:31] will retry after 373.052023ms: waiting for domain to come up
	I0317 13:46:06.359661  669958 main.go:141] libmachine: (no-preload-142429) DBG | domain no-preload-142429 has defined MAC address 52:54:00:52:c2:e7 in network mk-no-preload-142429
	I0317 13:46:06.360298  669958 main.go:141] libmachine: (no-preload-142429) DBG | unable to find current IP address of domain no-preload-142429 in network mk-no-preload-142429
	I0317 13:46:06.360330  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:06.360251  670085 retry.go:31] will retry after 436.910133ms: waiting for domain to come up
	I0317 13:46:06.799215  669958 main.go:141] libmachine: (no-preload-142429) DBG | domain no-preload-142429 has defined MAC address 52:54:00:52:c2:e7 in network mk-no-preload-142429
	I0317 13:46:06.799822  669958 main.go:141] libmachine: (no-preload-142429) DBG | unable to find current IP address of domain no-preload-142429 in network mk-no-preload-142429
	I0317 13:46:06.799851  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:06.799784  670085 retry.go:31] will retry after 378.093836ms: waiting for domain to come up
	I0317 13:46:07.179418  669958 main.go:141] libmachine: (no-preload-142429) DBG | domain no-preload-142429 has defined MAC address 52:54:00:52:c2:e7 in network mk-no-preload-142429
	I0317 13:46:07.179929  669958 main.go:141] libmachine: (no-preload-142429) DBG | unable to find current IP address of domain no-preload-142429 in network mk-no-preload-142429
	I0317 13:46:07.179960  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:07.179905  670085 retry.go:31] will retry after 536.546711ms: waiting for domain to come up
	I0317 13:46:07.718242  669958 main.go:141] libmachine: (no-preload-142429) DBG | domain no-preload-142429 has defined MAC address 52:54:00:52:c2:e7 in network mk-no-preload-142429
	I0317 13:46:07.718838  669958 main.go:141] libmachine: (no-preload-142429) DBG | unable to find current IP address of domain no-preload-142429 in network mk-no-preload-142429
	I0317 13:46:07.718869  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:07.718810  670085 retry.go:31] will retry after 755.599412ms: waiting for domain to come up
	I0317 13:46:08.476102  669958 main.go:141] libmachine: (no-preload-142429) DBG | domain no-preload-142429 has defined MAC address 52:54:00:52:c2:e7 in network mk-no-preload-142429
	I0317 13:46:08.476738  669958 main.go:141] libmachine: (no-preload-142429) DBG | unable to find current IP address of domain no-preload-142429 in network mk-no-preload-142429
	I0317 13:46:08.476768  669958 main.go:141] libmachine: (no-preload-142429) DBG | I0317 13:46:08.476705  670085 retry.go:31] will retry after 1.080564137s: waiting for domain to come up
	I0317 13:46:06.634968  669506 ssh_runner.go:195] Run: grep 192.168.50.55	control-plane.minikube.internal$ /etc/hosts
	I0317 13:46:06.671860  669506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:46:06.956606  669506 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:46:07.040892  669506 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638 for IP: 192.168.50.55
	I0317 13:46:07.040917  669506 certs.go:194] generating shared ca certs ...
	I0317 13:46:07.040938  669506 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:07.041126  669506 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:46:07.041177  669506 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:46:07.041189  669506 certs.go:256] generating profile certs ...
	I0317 13:46:07.041309  669506 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/client.key
	I0317 13:46:07.041367  669506 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.key.ef68222a
	I0317 13:46:07.041412  669506 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/proxy-client.key
	I0317 13:46:07.041561  669506 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:46:07.041602  669506 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:46:07.041616  669506 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:46:07.041648  669506 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:46:07.041677  669506 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:46:07.041706  669506 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:46:07.041758  669506 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:46:07.042699  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:46:07.169205  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:46:07.479050  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:46:07.573442  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:46:07.614021  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0317 13:46:07.646300  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:46:07.714517  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:46:07.805802  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:46:07.862297  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:46:07.923263  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:46:07.963048  669506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:46:08.002845  669506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:46:08.028827  669506 ssh_runner.go:195] Run: openssl version
	I0317 13:46:08.037451  669506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:46:08.053330  669506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:46:08.059149  669506 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:46:08.059231  669506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:46:08.067411  669506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:46:08.087083  669506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:46:08.107936  669506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:46:08.115118  669506 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:46:08.115192  669506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:46:08.126941  669506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:46:08.146293  669506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:46:08.159405  669506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:46:08.164427  669506 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:46:08.164512  669506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:46:08.172617  669506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:46:08.185019  669506 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:46:08.192341  669506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0317 13:46:08.202147  669506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0317 13:46:08.209362  669506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0317 13:46:08.220369  669506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0317 13:46:08.228052  669506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0317 13:46:08.234743  669506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0317 13:46:08.242262  669506 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-312638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:46:08.242368  669506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:46:08.242502  669506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:46:08.305107  669506 cri.go:89] found id: "9e9c7189f0e5f96ba2540d4e107c1fa666153b47a74e15a40bf9b6857a6bad78"
	I0317 13:46:08.305150  669506 cri.go:89] found id: "9fc06cde5b1e0e6d2edb69840ae77bd07095ed25da5a283c3e5e5586bd87c320"
	I0317 13:46:08.305157  669506 cri.go:89] found id: "f8dc91a5d481c159f06db293206cc679b5cab37ad91bbe319011c5122be5d477"
	I0317 13:46:08.305163  669506 cri.go:89] found id: "f6995313d5a5bae79d3a21fb6e0a6cdbaa2f7e718836d3f6810ef58261998c5c"
	I0317 13:46:08.305168  669506 cri.go:89] found id: "f23cde61a90bd944349e7f8eabe7be5e9b9f60be4c6defea742d805a04de9bfc"
	I0317 13:46:08.305174  669506 cri.go:89] found id: "5ca391ea0134e9ad81f457a5019be37e1f147440e3f32406c53b8149301d2851"
	I0317 13:46:08.305178  669506 cri.go:89] found id: "6e9c3b842208be0ab0a27e162d76f8ebe3e1b43a434c714cb5a9a127b562df1b"
	I0317 13:46:08.305182  669506 cri.go:89] found id: "67d51172622131b180524a30ed50425f15751ae8764526867e2c5edf93d51d45"
	I0317 13:46:08.305186  669506 cri.go:89] found id: "f05db4362dd4ccb96a407580b59995b5b4795e2c3ea18dd651a137ac2760636d"
	I0317 13:46:08.305195  669506 cri.go:89] found id: "1bf123df0a15f8b63c9e3f47af652f39a8f34e37b3d5c9232c655f53b7bbf5c4"
	I0317 13:46:08.305199  669506 cri.go:89] found id: "0147c4c68bcb0d7c93f803a8e51c126bf38297f7d6e2a9336ebeb1925b6c8b35"
	I0317 13:46:08.305203  669506 cri.go:89] found id: "e16b40ce4590bb04d78f95b5a17c4b6a245bd4d7b40cbf74da191584eeb72d8f"
	I0317 13:46:08.305207  669506 cri.go:89] found id: "7394c5d573f8db45ad9d967afbc89c781a4606cca5ad7b6d1bddf7e2e184d1b7"
	I0317 13:46:08.305211  669506 cri.go:89] found id: "d9a6c38c4da4e7cc36865224187d8cf50975a82f62b87a69e23d46a7f4db0658"
	I0317 13:46:08.305216  669506 cri.go:89] found id: "7508630e05f3525dad929f28a08c106bc429f15537066cee58ade524c8ed6286"
	I0317 13:46:08.305223  669506 cri.go:89] found id: ""
	I0317 13:46:08.305284  669506 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-312638 -n kubernetes-upgrade-312638
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-312638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-312638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-312638
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-312638: (1.172977711s)
--- FAIL: TestKubernetesUpgrade (435.43s)
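
Note: the post-mortem above validates the control-plane certificates with "openssl x509 -noout -in <cert> -checkend 86400", which exits non-zero when a certificate is already expired or will expire within the next 86400 seconds (24 hours). The Go sketch below is only an illustrative equivalent of that check and is not part of the test suite; the certificate path is taken from the log, everything else is assumed.

// certcheck: illustrative re-implementation of "openssl x509 -checkend 86400"
// for one of the certificates inspected in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path copied from the post-mortem log; adjust as needed.
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found in", path)
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}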

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (77.5s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-880805 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-880805 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.32145918s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-880805] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-880805" primary control-plane node in "pause-880805" cluster
	* Updating the running kvm2 "pause-880805" VM ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-880805" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:43:44.816659  667886 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:43:44.817206  667886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:43:44.817223  667886 out.go:358] Setting ErrFile to fd 2...
	I0317 13:43:44.817229  667886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:43:44.817706  667886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:43:44.818528  667886 out.go:352] Setting JSON to false
	I0317 13:43:44.819979  667886 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12369,"bootTime":1742206656,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:43:44.820055  667886 start.go:139] virtualization: kvm guest
	I0317 13:43:44.821933  667886 out.go:177] * [pause-880805] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:43:44.823593  667886 notify.go:220] Checking for updates...
	I0317 13:43:44.823620  667886 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:43:44.825018  667886 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:43:44.826472  667886 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:43:44.827784  667886 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:43:44.829055  667886 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:43:44.830143  667886 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:43:44.831826  667886 config.go:182] Loaded profile config "pause-880805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:43:44.832451  667886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:43:44.832552  667886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:43:44.849435  667886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0317 13:43:44.850014  667886 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:43:44.850642  667886 main.go:141] libmachine: Using API Version  1
	I0317 13:43:44.850676  667886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:43:44.851093  667886 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:43:44.851331  667886 main.go:141] libmachine: (pause-880805) Calling .DriverName
	I0317 13:43:44.851660  667886 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:43:44.852028  667886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:43:44.852087  667886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:43:44.867896  667886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
	I0317 13:43:44.868468  667886 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:43:44.869495  667886 main.go:141] libmachine: Using API Version  1
	I0317 13:43:44.869519  667886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:43:44.871078  667886 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:43:44.871306  667886 main.go:141] libmachine: (pause-880805) Calling .DriverName
	I0317 13:43:44.921413  667886 out.go:177] * Using the kvm2 driver based on existing profile
	I0317 13:43:44.922738  667886 start.go:297] selected driver: kvm2
	I0317 13:43:44.922765  667886 start.go:901] validating driver "kvm2" against &{Name:pause-880805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-880805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:43:44.922968  667886 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:43:44.923514  667886 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:43:44.923737  667886 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 13:43:44.944040  667886 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 13:43:44.945150  667886 cni.go:84] Creating CNI manager for ""
	I0317 13:43:44.945211  667886 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:43:44.945295  667886 start.go:340] cluster config:
	{Name:pause-880805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-880805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:43:44.945506  667886 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:43:44.947428  667886 out.go:177] * Starting "pause-880805" primary control-plane node in "pause-880805" cluster
	I0317 13:43:44.948805  667886 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:43:44.948858  667886 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0317 13:43:44.948871  667886 cache.go:56] Caching tarball of preloaded images
	I0317 13:43:44.948988  667886 preload.go:172] Found /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0317 13:43:44.949003  667886 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0317 13:43:44.949146  667886 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/pause-880805/config.json ...
	I0317 13:43:44.949412  667886 start.go:360] acquireMachinesLock for pause-880805: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:44:10.528131  667886 start.go:364] duration metric: took 25.578678973s to acquireMachinesLock for "pause-880805"
	I0317 13:44:10.528201  667886 start.go:96] Skipping create...Using existing machine configuration
	I0317 13:44:10.528209  667886 fix.go:54] fixHost starting: 
	I0317 13:44:10.528613  667886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:10.528663  667886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:10.549714  667886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40231
	I0317 13:44:10.550257  667886 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:10.550753  667886 main.go:141] libmachine: Using API Version  1
	I0317 13:44:10.550783  667886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:10.551187  667886 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:10.551400  667886 main.go:141] libmachine: (pause-880805) Calling .DriverName
	I0317 13:44:10.551566  667886 main.go:141] libmachine: (pause-880805) Calling .GetState
	I0317 13:44:10.553340  667886 fix.go:112] recreateIfNeeded on pause-880805: state=Running err=<nil>
	W0317 13:44:10.553375  667886 fix.go:138] unexpected machine state, will restart: <nil>
	I0317 13:44:10.555074  667886 out.go:177] * Updating the running kvm2 "pause-880805" VM ...
	I0317 13:44:10.556310  667886 machine.go:93] provisionDockerMachine start ...
	I0317 13:44:10.556336  667886 main.go:141] libmachine: (pause-880805) Calling .DriverName
	I0317 13:44:10.556513  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHHostname
	I0317 13:44:10.558999  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:10.559499  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:10.559547  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:10.559697  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHPort
	I0317 13:44:10.559870  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:10.560047  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:10.560199  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHUsername
	I0317 13:44:10.560431  667886 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:10.560709  667886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0317 13:44:10.560727  667886 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 13:44:10.671643  667886 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-880805
	
	I0317 13:44:10.671675  667886 main.go:141] libmachine: (pause-880805) Calling .GetMachineName
	I0317 13:44:10.671940  667886 buildroot.go:166] provisioning hostname "pause-880805"
	I0317 13:44:10.671976  667886 main.go:141] libmachine: (pause-880805) Calling .GetMachineName
	I0317 13:44:10.672170  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHHostname
	I0317 13:44:10.675180  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:10.675685  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:10.675730  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:10.675897  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHPort
	I0317 13:44:10.676100  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:10.676281  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:10.676488  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHUsername
	I0317 13:44:10.676728  667886 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:10.676965  667886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0317 13:44:10.676981  667886 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-880805 && echo "pause-880805" | sudo tee /etc/hostname
	I0317 13:44:10.801441  667886 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-880805
	
	I0317 13:44:10.801475  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHHostname
	I0317 13:44:10.804347  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:10.804803  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:10.804826  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:10.805057  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHPort
	I0317 13:44:10.805273  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:10.805449  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:10.805608  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHUsername
	I0317 13:44:10.805775  667886 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:10.806027  667886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0317 13:44:10.806045  667886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-880805' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-880805/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-880805' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:44:10.921216  667886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:44:10.921262  667886 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:44:10.921323  667886 buildroot.go:174] setting up certificates
	I0317 13:44:10.921341  667886 provision.go:84] configureAuth start
	I0317 13:44:10.921365  667886 main.go:141] libmachine: (pause-880805) Calling .GetMachineName
	I0317 13:44:10.921728  667886 main.go:141] libmachine: (pause-880805) Calling .GetIP
	I0317 13:44:10.924633  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:10.925053  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:10.925081  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:10.925254  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHHostname
	I0317 13:44:10.927965  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:10.928391  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:10.928430  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:10.928542  667886 provision.go:143] copyHostCerts
	I0317 13:44:10.928602  667886 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:44:10.928616  667886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:44:10.928682  667886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:44:10.928823  667886 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:44:10.928835  667886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:44:10.928866  667886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:44:10.928956  667886 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:44:10.928967  667886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:44:10.928996  667886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:44:10.929078  667886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.pause-880805 san=[127.0.0.1 192.168.39.171 localhost minikube pause-880805]
	I0317 13:44:11.134615  667886 provision.go:177] copyRemoteCerts
	I0317 13:44:11.134669  667886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:44:11.134699  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHHostname
	I0317 13:44:11.137239  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:11.137517  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:11.137547  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:11.137674  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHPort
	I0317 13:44:11.137902  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:11.138037  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHUsername
	I0317 13:44:11.138148  667886 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/pause-880805/id_rsa Username:docker}
	I0317 13:44:11.223764  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 13:44:11.246882  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:44:11.267750  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0317 13:44:11.289485  667886 provision.go:87] duration metric: took 368.126618ms to configureAuth
	I0317 13:44:11.289512  667886 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:44:11.289755  667886 config.go:182] Loaded profile config "pause-880805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:44:11.289837  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHHostname
	I0317 13:44:11.292330  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:11.292698  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:11.292716  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:11.292912  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHPort
	I0317 13:44:11.293106  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:11.293292  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:11.293441  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHUsername
	I0317 13:44:11.293600  667886 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:11.293834  667886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0317 13:44:11.293850  667886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:44:16.801399  667886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:44:16.801448  667886 machine.go:96] duration metric: took 6.245101931s to provisionDockerMachine
	I0317 13:44:16.801463  667886 start.go:293] postStartSetup for "pause-880805" (driver="kvm2")
	I0317 13:44:16.801478  667886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:44:16.801504  667886 main.go:141] libmachine: (pause-880805) Calling .DriverName
	I0317 13:44:16.801923  667886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:44:16.801957  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHHostname
	I0317 13:44:16.804807  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:16.805412  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:16.805440  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:16.805595  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHPort
	I0317 13:44:16.805808  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:16.806008  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHUsername
	I0317 13:44:16.806177  667886 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/pause-880805/id_rsa Username:docker}
	I0317 13:44:16.894756  667886 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:44:16.898621  667886 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:44:16.898645  667886 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:44:16.898724  667886 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:44:16.898822  667886 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:44:16.898929  667886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:44:16.908338  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:44:16.932926  667886 start.go:296] duration metric: took 131.448062ms for postStartSetup
	I0317 13:44:16.932964  667886 fix.go:56] duration metric: took 6.404756118s for fixHost
	I0317 13:44:16.932990  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHHostname
	I0317 13:44:16.935672  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:16.936105  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:16.936142  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:16.936275  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHPort
	I0317 13:44:16.936531  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:16.936699  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:16.936803  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHUsername
	I0317 13:44:16.937005  667886 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:16.937242  667886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0317 13:44:16.937258  667886 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:44:17.043833  667886 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219057.022348979
	
	I0317 13:44:17.043860  667886 fix.go:216] guest clock: 1742219057.022348979
	I0317 13:44:17.043871  667886 fix.go:229] Guest: 2025-03-17 13:44:17.022348979 +0000 UTC Remote: 2025-03-17 13:44:16.932968565 +0000 UTC m=+32.155904937 (delta=89.380414ms)
	I0317 13:44:17.043894  667886 fix.go:200] guest clock delta is within tolerance: 89.380414ms
	I0317 13:44:17.043899  667886 start.go:83] releasing machines lock for "pause-880805", held for 6.515724755s
	I0317 13:44:17.043939  667886 main.go:141] libmachine: (pause-880805) Calling .DriverName
	I0317 13:44:17.044306  667886 main.go:141] libmachine: (pause-880805) Calling .GetIP
	I0317 13:44:17.047081  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:17.047419  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:17.047448  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:17.047629  667886 main.go:141] libmachine: (pause-880805) Calling .DriverName
	I0317 13:44:17.048247  667886 main.go:141] libmachine: (pause-880805) Calling .DriverName
	I0317 13:44:17.048449  667886 main.go:141] libmachine: (pause-880805) Calling .DriverName
	I0317 13:44:17.048556  667886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:44:17.048595  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHHostname
	I0317 13:44:17.048712  667886 ssh_runner.go:195] Run: cat /version.json
	I0317 13:44:17.048756  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHHostname
	I0317 13:44:17.051455  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:17.051499  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:17.051838  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:17.051873  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:17.051897  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:17.051927  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:17.052091  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHPort
	I0317 13:44:17.052235  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHPort
	I0317 13:44:17.052295  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:17.052411  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHKeyPath
	I0317 13:44:17.052436  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHUsername
	I0317 13:44:17.052551  667886 main.go:141] libmachine: (pause-880805) Calling .GetSSHUsername
	I0317 13:44:17.052599  667886 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/pause-880805/id_rsa Username:docker}
	I0317 13:44:17.052689  667886 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/pause-880805/id_rsa Username:docker}
	I0317 13:44:17.151074  667886 ssh_runner.go:195] Run: systemctl --version
	I0317 13:44:17.158710  667886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:44:17.310482  667886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:44:17.315852  667886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:44:17.315930  667886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:44:17.325086  667886 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0317 13:44:17.325107  667886 start.go:495] detecting cgroup driver to use...
	I0317 13:44:17.325166  667886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:44:17.340754  667886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:44:17.353550  667886 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:44:17.353614  667886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:44:17.371617  667886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:44:17.389999  667886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:44:17.543662  667886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:44:17.691348  667886 docker.go:233] disabling docker service ...
	I0317 13:44:17.691426  667886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:44:17.708645  667886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:44:17.722841  667886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:44:17.858015  667886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:44:18.008850  667886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:44:18.023268  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:44:18.043685  667886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0317 13:44:18.043740  667886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:18.054042  667886 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:44:18.054139  667886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:18.064700  667886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:18.074064  667886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:18.083428  667886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:44:18.093154  667886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:18.102465  667886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:18.115037  667886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:18.124395  667886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:44:18.133086  667886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:44:18.141735  667886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:44:18.274299  667886 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:44:19.310651  667886 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.03631182s)
	I0317 13:44:19.310693  667886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:44:19.310748  667886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:44:19.315434  667886 start.go:563] Will wait 60s for crictl version
	I0317 13:44:19.315491  667886 ssh_runner.go:195] Run: which crictl
	I0317 13:44:19.319192  667886 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:44:19.353837  667886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:44:19.353929  667886 ssh_runner.go:195] Run: crio --version
	I0317 13:44:19.388931  667886 ssh_runner.go:195] Run: crio --version
	I0317 13:44:19.427313  667886 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0317 13:44:19.428564  667886 main.go:141] libmachine: (pause-880805) Calling .GetIP
	I0317 13:44:19.432539  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:19.432930  667886 main.go:141] libmachine: (pause-880805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:8e:5e", ip: ""} in network mk-pause-880805: {Iface:virbr1 ExpiryTime:2025-03-17 14:43:09 +0000 UTC Type:0 Mac:52:54:00:db:8e:5e Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:pause-880805 Clientid:01:52:54:00:db:8e:5e}
	I0317 13:44:19.432973  667886 main.go:141] libmachine: (pause-880805) DBG | domain pause-880805 has defined IP address 192.168.39.171 and MAC address 52:54:00:db:8e:5e in network mk-pause-880805
	I0317 13:44:19.433294  667886 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0317 13:44:19.439072  667886 kubeadm.go:883] updating cluster {Name:pause-880805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-880805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:44:19.439272  667886 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:44:19.439328  667886 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:44:19.482898  667886 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:44:19.482930  667886 crio.go:433] Images already preloaded, skipping extraction
	I0317 13:44:19.482996  667886 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:44:19.514649  667886 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:44:19.514674  667886 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:44:19.514682  667886 kubeadm.go:934] updating node { 192.168.39.171 8443 v1.32.2 crio true true} ...
	I0317 13:44:19.514774  667886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-880805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-880805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:44:19.514855  667886 ssh_runner.go:195] Run: crio config
	I0317 13:44:19.573353  667886 cni.go:84] Creating CNI manager for ""
	I0317 13:44:19.573378  667886 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:44:19.573391  667886 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:44:19.573416  667886 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.171 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-880805 NodeName:pause-880805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:44:19.573565  667886 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-880805"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.171"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.171"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:44:19.573637  667886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:44:19.583976  667886 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:44:19.584051  667886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:44:19.593162  667886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0317 13:44:19.610944  667886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:44:19.627732  667886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0317 13:44:19.644409  667886 ssh_runner.go:195] Run: grep 192.168.39.171	control-plane.minikube.internal$ /etc/hosts
	I0317 13:44:19.648199  667886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:44:19.777027  667886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:44:19.796658  667886 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/pause-880805 for IP: 192.168.39.171
	I0317 13:44:19.796698  667886 certs.go:194] generating shared ca certs ...
	I0317 13:44:19.796723  667886 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:19.796910  667886 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:44:19.796954  667886 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:44:19.796964  667886 certs.go:256] generating profile certs ...
	I0317 13:44:19.797044  667886 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/pause-880805/client.key
	I0317 13:44:19.797105  667886 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/pause-880805/apiserver.key.8a828697
	I0317 13:44:19.797152  667886 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/pause-880805/proxy-client.key
	I0317 13:44:19.797296  667886 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:44:19.797328  667886 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:44:19.797339  667886 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:44:19.797360  667886 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:44:19.797381  667886 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:44:19.797413  667886 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:44:19.797467  667886 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:44:19.798478  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:44:19.827767  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:44:19.854710  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:44:19.881786  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:44:19.914617  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/pause-880805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 13:44:19.946445  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/pause-880805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 13:44:19.980824  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/pause-880805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:44:20.006455  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/pause-880805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:44:20.032162  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:44:20.058683  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:44:20.084653  667886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:44:20.110403  667886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:44:20.125638  667886 ssh_runner.go:195] Run: openssl version
	I0317 13:44:20.136806  667886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:44:20.186038  667886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:44:20.197783  667886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:44:20.197861  667886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:44:20.224113  667886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:44:20.259738  667886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:44:20.343863  667886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:44:20.388062  667886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:44:20.388147  667886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:44:20.418226  667886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:44:20.529977  667886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:44:20.553778  667886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:44:20.563779  667886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:44:20.563859  667886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:44:20.597862  667886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:44:20.628348  667886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:44:20.633844  667886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0317 13:44:20.639460  667886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0317 13:44:20.647375  667886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0317 13:44:20.676467  667886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0317 13:44:20.719506  667886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0317 13:44:20.764941  667886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0317 13:44:20.807299  667886 kubeadm.go:392] StartCluster: {Name:pause-880805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-880805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:44:20.807452  667886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:44:20.807555  667886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:44:21.023526  667886 cri.go:89] found id: "48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc"
	I0317 13:44:21.023576  667886 cri.go:89] found id: "0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234"
	I0317 13:44:21.023582  667886 cri.go:89] found id: "bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb"
	I0317 13:44:21.023587  667886 cri.go:89] found id: "e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9"
	I0317 13:44:21.023592  667886 cri.go:89] found id: "43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523"
	I0317 13:44:21.023597  667886 cri.go:89] found id: "8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2"
	I0317 13:44:21.023601  667886 cri.go:89] found id: "52134ed65163f0d1f9dc051343b71d0609599f2288e077011f105c79e9ca5d6c"
	I0317 13:44:21.023605  667886 cri.go:89] found id: "825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b"
	I0317 13:44:21.023608  667886 cri.go:89] found id: "f27f490b29263ee5a59064e2ba91a6135334641b3c1d0e269eafcd9e8d51b8d4"
	I0317 13:44:21.023619  667886 cri.go:89] found id: "8d94b340edcaf8330770907cb316e84af06980194bcedeac7a6a85ef9edfe908"
	I0317 13:44:21.023624  667886 cri.go:89] found id: "3e06072ef41edf254b60c17c41c093da497170312a62382cabecb4114b593c5c"
	I0317 13:44:21.023630  667886 cri.go:89] found id: ""
	I0317 13:44:21.023688  667886 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
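Editor's note: in the stderr log above, each control-plane certificate under /var/lib/minikube/certs is verified with openssl x509 -noout -in <cert> -checkend 86400, i.e. a check that the certificate remains valid for at least the next 24 hours. The Go sketch below is a hypothetical, minimal equivalent of that check for illustration only; it is not minikube's implementation, and the certificate path in main is just an example copied from the log.

// Minimal sketch of an "openssl x509 -checkend"-style test in Go.
// Hypothetical helper, not taken from the minikube codebase.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will
// expire within d, which is what "openssl x509 -checkend <seconds>" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// The certificate is "expiring" if now + d falls past its NotAfter time.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Example path copied from the log above; any PEM certificate works.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}

In the run above these checks are followed directly by StartCluster, which is consistent with the earlier "skipping valid signed profile cert regeneration" lines: the existing certificates were still valid and were reused.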
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-880805 -n pause-880805
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-880805 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-880805 logs -n 25: (1.408306336s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo docker                         | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo find                           | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo crio                           | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-788750                                     | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:43 UTC |
	| start   | -p cert-expiration-355456                            | cert-expiration-355456    | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:44 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-880805                                      | pause-880805              | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:44 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-662195                          | force-systemd-env-662195  | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:44 UTC |
	| start   | -p force-systemd-flag-638911                         | force-systemd-flag-638911 | jenkins | v1.35.0 | 17 Mar 25 13:44 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-312638                         | kubernetes-upgrade-312638 | jenkins | v1.35.0 | 17 Mar 25 13:44 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:44:29
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:44:29.845999  668474 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:44:29.846124  668474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:44:29.846134  668474 out.go:358] Setting ErrFile to fd 2...
	I0317 13:44:29.846141  668474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:44:29.846405  668474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:44:29.846954  668474 out.go:352] Setting JSON to false
	I0317 13:44:29.848113  668474 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12414,"bootTime":1742206656,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:44:29.848216  668474 start.go:139] virtualization: kvm guest
	I0317 13:44:29.850375  668474 out.go:177] * [kubernetes-upgrade-312638] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:44:29.851693  668474 notify.go:220] Checking for updates...
	I0317 13:44:29.851699  668474 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:44:29.852921  668474 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:44:29.854069  668474 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:44:29.855302  668474 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:44:29.856589  668474 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:44:29.857772  668474 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:44:29.859606  668474 config.go:182] Loaded profile config "kubernetes-upgrade-312638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:44:29.860240  668474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:29.860329  668474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:29.876137  668474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0317 13:44:29.876654  668474 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:29.877169  668474 main.go:141] libmachine: Using API Version  1
	I0317 13:44:29.877192  668474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:29.877656  668474 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:29.877859  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:44:29.878243  668474 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:44:29.878683  668474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:29.878732  668474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:29.894672  668474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0317 13:44:29.895269  668474 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:29.895783  668474 main.go:141] libmachine: Using API Version  1
	I0317 13:44:29.895820  668474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:29.896218  668474 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:29.896377  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:44:29.937890  668474 out.go:177] * Using the kvm2 driver based on existing profile
	I0317 13:44:29.939292  668474 start.go:297] selected driver: kvm2
	I0317 13:44:29.939314  668474 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-312638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:44:29.939430  668474 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:44:29.940675  668474 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:44:29.940798  668474 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 13:44:29.971270  668474 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 13:44:29.971935  668474 cni.go:84] Creating CNI manager for ""
	I0317 13:44:29.972001  668474 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:44:29.972062  668474 start.go:340] cluster config:
	{Name:kubernetes-upgrade-312638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:44:29.972208  668474 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:44:29.974947  668474 out.go:177] * Starting "kubernetes-upgrade-312638" primary control-plane node in "kubernetes-upgrade-312638" cluster
	I0317 13:44:26.744778  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:26.745218  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find current IP address of domain force-systemd-flag-638911 in network mk-force-systemd-flag-638911
	I0317 13:44:26.745238  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | I0317 13:44:26.745209  668239 retry.go:31] will retry after 1.580348632s: waiting for domain to come up
	I0317 13:44:28.326768  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:28.327396  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find current IP address of domain force-systemd-flag-638911 in network mk-force-systemd-flag-638911
	I0317 13:44:28.327425  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | I0317 13:44:28.327370  668239 retry.go:31] will retry after 2.363365443s: waiting for domain to come up
	I0317 13:44:32.980491  667886 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc 0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234 bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9 43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523 8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2 52134ed65163f0d1f9dc051343b71d0609599f2288e077011f105c79e9ca5d6c 825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b f27f490b29263ee5a59064e2ba91a6135334641b3c1d0e269eafcd9e8d51b8d4 8d94b340edcaf8330770907cb316e84af06980194bcedeac7a6a85ef9edfe908 3e06072ef41edf254b60c17c41c093da497170312a62382cabecb4114b593c5c: (11.632534719s)
	W0317 13:44:32.980595  667886 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc 0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234 bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9 43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523 8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2 52134ed65163f0d1f9dc051343b71d0609599f2288e077011f105c79e9ca5d6c 825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b f27f490b29263ee5a59064e2ba91a6135334641b3c1d0e269eafcd9e8d51b8d4 8d94b340edcaf8330770907cb316e84af06980194bcedeac7a6a85ef9edfe908 3e06072ef41edf254b60c17c41c093da497170312a62382cabecb4114b593c5c: Process exited with status 1
	stdout:
	48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc
	0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234
	bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb
	e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9
	43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523
	8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2
	52134ed65163f0d1f9dc051343b71d0609599f2288e077011f105c79e9ca5d6c
	
	stderr:
	E0317 13:44:32.957310    3049 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b\": container with ID starting with 825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b not found: ID does not exist" containerID="825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b"
	time="2025-03-17T13:44:32Z" level=fatal msg="stopping the container \"825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b\": rpc error: code = NotFound desc = could not find container \"825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b\": container with ID starting with 825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b not found: ID does not exist"
	I0317 13:44:32.980675  667886 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0317 13:44:33.029861  667886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:44:33.039891  667886 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Mar 17 13:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Mar 17 13:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar 17 13:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Mar 17 13:43 /etc/kubernetes/scheduler.conf
	
	I0317 13:44:33.039987  667886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:44:33.048987  667886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:44:33.057773  667886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:44:33.066431  667886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:44:33.066503  667886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:44:33.076083  667886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:44:33.085292  667886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:44:33.085359  667886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
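	The grep/rm pairs above implement a simple staleness check on the existing kubeconfigs; a hedged sketch of the same logic, assuming the config paths and endpoint shown in this log:
	  for f in /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
	    # keep the file only if it already points at the expected control-plane endpoint
	    sudo grep -q 'https://control-plane.minikube.internal:8443' "$f" || sudo rm -f "$f"
	  done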
	I0317 13:44:33.094458  667886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:44:33.103384  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:44:33.159411  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:44:34.534259  667886 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.374781885s)
	I0317 13:44:34.534299  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:44:34.740030  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:44:34.810074  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
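	The five commands above re-run kubeadm phase by phase rather than as a single init; a condensed sketch of the same sequence, assuming the binary and config paths from this log:
	  KPATH=/var/lib/minikube/binaries/v1.32.2
	  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    # $phase is intentionally unquoted so "certs all" expands to two arguments
	    sudo env PATH="$KPATH:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	  done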
	I0317 13:44:29.976251  668474 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:44:29.976305  668474 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0317 13:44:29.976319  668474 cache.go:56] Caching tarball of preloaded images
	I0317 13:44:29.976429  668474 preload.go:172] Found /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0317 13:44:29.976445  668474 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0317 13:44:29.976565  668474 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/config.json ...
	I0317 13:44:29.976826  668474 start.go:360] acquireMachinesLock for kubernetes-upgrade-312638: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:44:30.692746  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:30.693214  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find current IP address of domain force-systemd-flag-638911 in network mk-force-systemd-flag-638911
	I0317 13:44:30.693241  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | I0317 13:44:30.693181  668239 retry.go:31] will retry after 2.744285626s: waiting for domain to come up
	I0317 13:44:33.439543  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:33.440110  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find current IP address of domain force-systemd-flag-638911 in network mk-force-systemd-flag-638911
	I0317 13:44:33.440166  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | I0317 13:44:33.440108  668239 retry.go:31] will retry after 3.306472858s: waiting for domain to come up
	I0317 13:44:34.914417  667886 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:44:34.914531  667886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:44:35.414658  667886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:44:35.477684  667886 api_server.go:72] duration metric: took 563.267944ms to wait for apiserver process to appear ...
	I0317 13:44:35.477711  667886 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:44:35.477773  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:35.478254  667886 api_server.go:269] stopped: https://192.168.39.171:8443/healthz: Get "https://192.168.39.171:8443/healthz": dial tcp 192.168.39.171:8443: connect: connection refused
	I0317 13:44:35.978463  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:38.508964  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0317 13:44:38.508993  667886 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0317 13:44:38.509008  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:38.547367  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0317 13:44:38.547396  667886 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0317 13:44:38.977973  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:38.982339  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:44:38.982368  667886 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:44:39.478003  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:39.483545  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:44:39.483576  667886 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:44:39.978195  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:39.982238  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0317 13:44:39.988851  667886 api_server.go:141] control plane version: v1.32.2
	I0317 13:44:39.988879  667886 api_server.go:131] duration metric: took 4.511160464s to wait for apiserver health ...
	I0317 13:44:39.988891  667886 cni.go:84] Creating CNI manager for ""
	I0317 13:44:39.988901  667886 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:44:39.990601  667886 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
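	The healthz probes above progress from 403 (RBAC not yet bootstrapped) through 500 (post-start hooks still pending) to 200; a minimal shell sketch of the same poll, assuming the apiserver address from this log and that anonymous access to /healthz is permitted once bootstrap completes:
	  APISERVER=https://192.168.39.171:8443
	  until [ "$(curl -sk -o /dev/null -w '%{http_code}' "$APISERVER/healthz")" = "200" ]; do
	    sleep 0.5   # keep polling while the endpoint returns 403/500 or refuses connections
	  done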
	I0317 13:44:36.750826  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:36.751362  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find current IP address of domain force-systemd-flag-638911 in network mk-force-systemd-flag-638911
	I0317 13:44:36.751417  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | I0317 13:44:36.751362  668239 retry.go:31] will retry after 4.311400463s: waiting for domain to come up
	I0317 13:44:42.380333  668474 start.go:364] duration metric: took 12.403452968s to acquireMachinesLock for "kubernetes-upgrade-312638"
	I0317 13:44:42.380400  668474 start.go:96] Skipping create...Using existing machine configuration
	I0317 13:44:42.380409  668474 fix.go:54] fixHost starting: 
	I0317 13:44:42.380901  668474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:42.380960  668474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:42.400444  668474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0317 13:44:42.400877  668474 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:42.401298  668474 main.go:141] libmachine: Using API Version  1
	I0317 13:44:42.401321  668474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:42.401685  668474 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:42.401899  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:44:42.402042  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetState
	I0317 13:44:42.403739  668474 fix.go:112] recreateIfNeeded on kubernetes-upgrade-312638: state=Stopped err=<nil>
	I0317 13:44:42.403770  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	W0317 13:44:42.403925  668474 fix.go:138] unexpected machine state, will restart: <nil>
	I0317 13:44:42.405416  668474 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-312638" ...
	I0317 13:44:41.065610  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.066281  668126 main.go:141] libmachine: (force-systemd-flag-638911) found domain IP: 192.168.61.182
	I0317 13:44:41.066308  668126 main.go:141] libmachine: (force-systemd-flag-638911) reserving static IP address...
	I0317 13:44:41.066325  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has current primary IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.066785  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find host DHCP lease matching {name: "force-systemd-flag-638911", mac: "52:54:00:22:7e:2e", ip: "192.168.61.182"} in network mk-force-systemd-flag-638911
	I0317 13:44:41.146339  668126 main.go:141] libmachine: (force-systemd-flag-638911) reserved static IP address 192.168.61.182 for domain force-systemd-flag-638911
	I0317 13:44:41.146375  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | Getting to WaitForSSH function...
	I0317 13:44:41.146384  668126 main.go:141] libmachine: (force-systemd-flag-638911) waiting for SSH...
	I0317 13:44:41.149162  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.149689  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.149722  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.149869  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | Using SSH client type: external
	I0317 13:44:41.149911  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa (-rw-------)
	I0317 13:44:41.149951  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:44:41.149964  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | About to run SSH command:
	I0317 13:44:41.149999  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | exit 0
	I0317 13:44:41.271224  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | SSH cmd err, output: <nil>: 
	I0317 13:44:41.271491  668126 main.go:141] libmachine: (force-systemd-flag-638911) KVM machine creation complete
	I0317 13:44:41.271854  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetConfigRaw
	I0317 13:44:41.272413  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:41.272588  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:41.272743  668126 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 13:44:41.272758  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetState
	I0317 13:44:41.274169  668126 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 13:44:41.274183  668126 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 13:44:41.274188  668126 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 13:44:41.274194  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.276641  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.277049  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.277080  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.277234  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.277429  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.277596  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.277762  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.277930  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:41.278192  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:41.278203  668126 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 13:44:41.378705  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:44:41.378740  668126 main.go:141] libmachine: Detecting the provisioner...
	I0317 13:44:41.378753  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.381523  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.381922  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.381956  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.382179  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.382369  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.382549  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.382695  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.382924  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:41.383187  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:41.383202  668126 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 13:44:41.487953  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 13:44:41.488057  668126 main.go:141] libmachine: found compatible host: buildroot
	I0317 13:44:41.488071  668126 main.go:141] libmachine: Provisioning with buildroot...
	I0317 13:44:41.488087  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetMachineName
	I0317 13:44:41.488390  668126 buildroot.go:166] provisioning hostname "force-systemd-flag-638911"
	I0317 13:44:41.488424  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetMachineName
	I0317 13:44:41.488639  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.491049  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.491428  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.491458  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.491666  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.491815  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.491984  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.492163  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.492335  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:41.492588  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:41.492601  668126 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-638911 && echo "force-systemd-flag-638911" | sudo tee /etc/hostname
	I0317 13:44:41.604607  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-638911
	
	I0317 13:44:41.604640  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.607102  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.607434  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.607484  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.607721  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.607912  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.608084  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.608193  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.608368  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:41.608558  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:41.608573  668126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-638911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-638911/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-638911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:44:41.720265  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:44:41.720370  668126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:44:41.720423  668126 buildroot.go:174] setting up certificates
	I0317 13:44:41.720439  668126 provision.go:84] configureAuth start
	I0317 13:44:41.720461  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetMachineName
	I0317 13:44:41.720826  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetIP
	I0317 13:44:41.723694  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.724080  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.724120  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.724347  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.726962  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.727372  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.727403  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.727624  668126 provision.go:143] copyHostCerts
	I0317 13:44:41.727665  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:44:41.727700  668126 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:44:41.727712  668126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:44:41.727765  668126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:44:41.727840  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:44:41.727857  668126 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:44:41.727864  668126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:44:41.727883  668126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:44:41.727925  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:44:41.727941  668126 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:44:41.727947  668126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:44:41.727965  668126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:44:41.728011  668126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-638911 san=[127.0.0.1 192.168.61.182 force-systemd-flag-638911 localhost minikube]
	I0317 13:44:41.762446  668126 provision.go:177] copyRemoteCerts
	I0317 13:44:41.762499  668126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:44:41.762525  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.765386  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.765790  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.765819  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.765956  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.766150  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.766351  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.766519  668126 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa Username:docker}
	I0317 13:44:41.849046  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0317 13:44:41.849120  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:44:41.872224  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0317 13:44:41.872293  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0317 13:44:41.895163  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0317 13:44:41.895241  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
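	A rough equivalent of the three cert copies above using stock OpenSSH rather than minikube's internal runner; the key path and guest address are taken from this log, while the local cert filenames stand in for the files listed above:
	  KEY=/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa
	  scp -i "$KEY" ca.pem server.pem server-key.pem docker@192.168.61.182:/tmp/
	  ssh -i "$KEY" docker@192.168.61.182 \
	    'sudo install -m 0600 /tmp/server-key.pem /etc/docker/ && sudo install -m 0644 /tmp/ca.pem /tmp/server.pem /etc/docker/'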
	I0317 13:44:41.917926  668126 provision.go:87] duration metric: took 197.472972ms to configureAuth
	I0317 13:44:41.917949  668126 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:44:41.918131  668126 config.go:182] Loaded profile config "force-systemd-flag-638911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:44:41.918216  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.920667  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.921020  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.921052  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.921307  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.921507  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.921665  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.921772  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.921894  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:41.922104  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:41.922124  668126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:44:42.134089  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:44:42.134123  668126 main.go:141] libmachine: Checking connection to Docker...
	I0317 13:44:42.134132  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetURL
	I0317 13:44:42.135603  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | using libvirt version 6000000
	I0317 13:44:42.138088  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.138439  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.138461  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.138660  668126 main.go:141] libmachine: Docker is up and running!
	I0317 13:44:42.138678  668126 main.go:141] libmachine: Reticulating splines...
	I0317 13:44:42.138685  668126 client.go:171] duration metric: took 24.91957911s to LocalClient.Create
	I0317 13:44:42.138711  668126 start.go:167] duration metric: took 24.919645187s to libmachine.API.Create "force-systemd-flag-638911"
	I0317 13:44:42.138722  668126 start.go:293] postStartSetup for "force-systemd-flag-638911" (driver="kvm2")
	I0317 13:44:42.138731  668126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:44:42.138746  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:42.138991  668126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:44:42.139018  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:42.141026  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.141420  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.141445  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.141556  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:42.141744  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:42.141878  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:42.142005  668126 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa Username:docker}
	I0317 13:44:42.222345  668126 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:44:42.226512  668126 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:44:42.226537  668126 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:44:42.226614  668126 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:44:42.226729  668126 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:44:42.226745  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> /etc/ssl/certs/6291882.pem
	I0317 13:44:42.226870  668126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:44:42.236270  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:44:42.263300  668126 start.go:296] duration metric: took 124.564546ms for postStartSetup
	I0317 13:44:42.263356  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetConfigRaw
	I0317 13:44:42.263989  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetIP
	I0317 13:44:42.266671  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.267069  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.267102  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.267400  668126 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/config.json ...
	I0317 13:44:42.267647  668126 start.go:128] duration metric: took 25.223324534s to createHost
	I0317 13:44:42.267674  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:42.270166  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.270519  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.270554  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.270745  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:42.270979  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:42.271150  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:42.271339  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:42.271570  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:42.271862  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:42.271876  668126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:44:42.380114  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219082.348913716
	
	I0317 13:44:42.380150  668126 fix.go:216] guest clock: 1742219082.348913716
	I0317 13:44:42.380171  668126 fix.go:229] Guest: 2025-03-17 13:44:42.348913716 +0000 UTC Remote: 2025-03-17 13:44:42.267660829 +0000 UTC m=+41.869578656 (delta=81.252887ms)
	I0317 13:44:42.380203  668126 fix.go:200] guest clock delta is within tolerance: 81.252887ms
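	The two fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and accept the 81ms drift because it sits inside a tolerance. A minimal Go sketch of that style of check, with an assumed 2s tolerance and a hypothetical clockDelta helper (not minikube's actual fix.go code):
	package main
	
	import (
		"fmt"
		"strconv"
		"time"
	)
	
	// clockDelta parses the "seconds.nanoseconds" output of `date +%s.%N`
	// and returns the signed difference between guest and host clocks.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}
	
	func main() {
		// Hypothetical guest reading, as it would come back over SSH.
		delta, err := clockDelta("1742219082.348913716", time.Now())
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed threshold, not minikube's real value
		if delta < -tolerance || delta > tolerance {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		} else {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		}
	}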
	I0317 13:44:42.380212  668126 start.go:83] releasing machines lock for "force-systemd-flag-638911", held for 25.336097901s
	I0317 13:44:42.380264  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:42.380567  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetIP
	I0317 13:44:42.383699  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.384167  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.384199  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.384397  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:42.385051  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:42.385250  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:42.385330  668126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:44:42.385378  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:42.385499  668126 ssh_runner.go:195] Run: cat /version.json
	I0317 13:44:42.385525  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:42.388428  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.388498  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.388870  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.388894  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.388918  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.388931  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.389214  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:42.389318  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:42.389402  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:42.389470  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:42.389583  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:42.389655  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:42.389736  668126 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa Username:docker}
	I0317 13:44:42.389815  668126 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa Username:docker}
	I0317 13:44:42.493984  668126 ssh_runner.go:195] Run: systemctl --version
	I0317 13:44:42.500108  668126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:44:42.657860  668126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:44:42.664405  668126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:44:42.664469  668126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:44:42.679993  668126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:44:42.680016  668126 start.go:495] detecting cgroup driver to use...
	I0317 13:44:42.680031  668126 start.go:499] using "systemd" cgroup driver as enforced via flags
	I0317 13:44:42.680075  668126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:44:42.699983  668126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:44:42.714052  668126 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:44:42.714111  668126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:44:42.727777  668126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:44:42.741547  668126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:44:42.859998  668126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:44:43.002917  668126 docker.go:233] disabling docker service ...
	I0317 13:44:43.002996  668126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:44:43.017621  668126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:44:43.032372  668126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:44:43.182125  668126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:44:43.318786  668126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:44:43.333035  668126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:44:43.351083  668126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0317 13:44:43.351160  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.361576  668126 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0317 13:44:43.361644  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.371685  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.381559  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.396859  668126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:44:43.409659  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.423207  668126 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.442208  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.453824  668126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:44:43.463122  668126 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:44:43.463188  668126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:44:43.475476  668126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:44:43.490099  668126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:44:43.620896  668126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:44:43.723544  668126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:44:43.723626  668126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
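	The "Will wait 60s for socket path" step above is just a stat poll until CRI-O recreates its socket after the restart. A rough, self-contained sketch of that wait loop (the 500ms poll interval is an assumption, not minikube's value):
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls until the given path exists or the timeout expires,
	// a simplified stand-in for the "Will wait 60s for socket path" step.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}
	
	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket is ready")
	}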
	I0317 13:44:43.728420  668126 start.go:563] Will wait 60s for crictl version
	I0317 13:44:43.728465  668126 ssh_runner.go:195] Run: which crictl
	I0317 13:44:43.731806  668126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:44:43.767301  668126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:44:43.767405  668126 ssh_runner.go:195] Run: crio --version
	I0317 13:44:43.794226  668126 ssh_runner.go:195] Run: crio --version
	I0317 13:44:43.821952  668126 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0317 13:44:39.991881  667886 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:44:40.014401  667886 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0317 13:44:40.034790  667886 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:44:40.039178  667886 system_pods.go:59] 6 kube-system pods found
	I0317 13:44:40.039213  667886 system_pods.go:61] "coredns-668d6bf9bc-nttjk" [b185d851-a2d4-4a9f-a30b-26d34b39beeb] Running
	I0317 13:44:40.039226  667886 system_pods.go:61] "etcd-pause-880805" [4fc616fb-91ce-443f-a9a5-1ab37a052d19] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0317 13:44:40.039235  667886 system_pods.go:61] "kube-apiserver-pause-880805" [d440f7c0-a631-4145-80f8-f4e50ed71084] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0317 13:44:40.039247  667886 system_pods.go:61] "kube-controller-manager-pause-880805" [32c3bc6f-471f-415a-84e8-dd540d5c6023] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0317 13:44:40.039254  667886 system_pods.go:61] "kube-proxy-j6xzf" [735bd65e-41e7-48bc-b9c2-c6fdda988310] Running
	I0317 13:44:40.039265  667886 system_pods.go:61] "kube-scheduler-pause-880805" [a8237c89-96b9-478c-aed0-113dc4e3b1dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0317 13:44:40.039278  667886 system_pods.go:74] duration metric: took 4.46195ms to wait for pod list to return data ...
	I0317 13:44:40.039299  667886 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:44:40.041945  667886 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:44:40.041969  667886 node_conditions.go:123] node cpu capacity is 2
	I0317 13:44:40.041980  667886 node_conditions.go:105] duration metric: took 2.673013ms to run NodePressure ...
	I0317 13:44:40.041996  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:44:40.312302  667886 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0317 13:44:40.314905  667886 kubeadm.go:739] kubelet initialised
	I0317 13:44:40.314928  667886 kubeadm.go:740] duration metric: took 2.598978ms waiting for restarted kubelet to initialise ...
	I0317 13:44:40.314939  667886 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:44:40.317212  667886 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:40.321121  667886 pod_ready.go:93] pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:40.321141  667886 pod_ready.go:82] duration metric: took 3.900997ms for pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:40.321149  667886 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:42.327995  667886 pod_ready.go:103] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"False"
	I0317 13:44:44.328639  667886 pod_ready.go:103] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"False"
	I0317 13:44:42.406502  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .Start
	I0317 13:44:42.406680  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) starting domain...
	I0317 13:44:42.406714  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) ensuring networks are active...
	I0317 13:44:42.407490  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Ensuring network default is active
	I0317 13:44:42.407936  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Ensuring network mk-kubernetes-upgrade-312638 is active
	I0317 13:44:42.408345  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) getting domain XML...
	I0317 13:44:42.409190  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) creating domain...
	I0317 13:44:43.726079  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) waiting for IP...
	I0317 13:44:43.727131  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:43.727721  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:43.727807  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:43.727700  668579 retry.go:31] will retry after 254.488886ms: waiting for domain to come up
	I0317 13:44:43.984328  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:43.984791  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:43.984850  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:43.984768  668579 retry.go:31] will retry after 259.583433ms: waiting for domain to come up
	I0317 13:44:44.246322  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:44.247055  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:44.247089  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:44.247017  668579 retry.go:31] will retry after 385.8999ms: waiting for domain to come up
	I0317 13:44:44.634847  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:44.635476  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:44.635508  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:44.635445  668579 retry.go:31] will retry after 413.669683ms: waiting for domain to come up
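	The retry.go lines above poll libvirt for the new domain's DHCP lease, sleeping a growing, jittered interval between attempts. A small stand-alone sketch of that retry-with-backoff shape, with a simulated probe in place of the real lease lookup (not minikube's retry package):
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	var errNoIP = errors.New("no IP address yet")
	
	// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
	// sleeping a growing, jittered interval between tries - the same shape as
	// the "will retry after ..." lines above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Grow the wait roughly geometrically and add up to 50% jitter.
			wait := base * time.Duration(1<<i)
			wait += time.Duration(rand.Int63n(int64(wait) / 2))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}
	
	func main() {
		tries := 0
		err := retryWithBackoff(8, 250*time.Millisecond, func() error {
			tries++
			if tries < 4 {
				return errNoIP // simulate the DHCP lease not being visible yet
			}
			return nil
		})
		fmt.Println("done:", err)
	}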
	I0317 13:44:43.823417  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetIP
	I0317 13:44:43.826970  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:43.827552  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:43.827581  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:43.827873  668126 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0317 13:44:43.831824  668126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
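	The bash one-liner above rewrites /etc/hosts idempotently: strip any stale host.minikube.internal line, append the current gateway IP, and copy the result back as root. The same idea in a short Go sketch (upsertHostsEntry is a hypothetical helper; it only prints the rewritten contents rather than writing them back):
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// upsertHostsEntry removes any existing line ending in "\t<host>" and appends
	// a fresh "<ip>\t<host>" line, mirroring the grep/echo pipeline in the log.
	// Writing the result back to /etc/hosts would still need root, as in the
	// sudo cp step above.
	func upsertHostsEntry(contents, ip, host string) string {
		var kept []string
		for _, line := range strings.Split(contents, "\n") {
			if strings.HasSuffix(line, "\t"+host) || line == "" {
				continue // drop stale entries and blank lines
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return strings.Join(kept, "\n") + "\n"
	}
	
	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		fmt.Print(upsertHostsEntry(string(data), "192.168.61.1", "host.minikube.internal"))
	}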
	I0317 13:44:43.844263  668126 kubeadm.go:883] updating cluster {Name:force-systemd-flag-638911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:forc
e-systemd-flag-638911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:44:43.844358  668126 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:44:43.844415  668126 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:44:43.874831  668126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0317 13:44:43.874901  668126 ssh_runner.go:195] Run: which lz4
	I0317 13:44:43.878477  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0317 13:44:43.878601  668126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:44:43.882516  668126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:44:43.882549  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0317 13:44:45.138506  668126 crio.go:462] duration metric: took 1.259947616s to copy over tarball
	I0317 13:44:45.138603  668126 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
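	The stat / scp / tar sequence above only transfers the preloaded image tarball when it is missing on the guest, then extracts it with lz4 into /var. A simplified local sketch of that flow, with cp standing in for the scp transfer and all paths assumed:
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// ensurePreload copies and extracts a preloaded image tarball only when the
	// target file is missing - the same stat / transfer / tar sequence shown above.
	func ensurePreload(src, dst, extractDir string) error {
		if _, err := os.Stat(dst); err != nil {
			// Not there yet: transfer it (minikube uses scp over its SSH runner).
			if out, err := exec.Command("cp", src, dst).CombinedOutput(); err != nil {
				return fmt.Errorf("copy failed: %v: %s", err, out)
			}
		}
		// Extract with lz4 decompression, preserving xattrs, into extractDir.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", extractDir, "-xf", dst)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract failed: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		err := ensurePreload(
			"preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4", // assumed local path
			"/preloaded.tar.lz4",
			"/var",
		)
		fmt.Println("ensurePreload:", err)
	}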
	I0317 13:44:46.827476  667886 pod_ready.go:103] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"False"
	I0317 13:44:49.418342  667886 pod_ready.go:103] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"False"
	I0317 13:44:45.051026  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:45.051634  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:45.051665  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:45.051597  668579 retry.go:31] will retry after 723.318576ms: waiting for domain to come up
	I0317 13:44:45.776707  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:45.777269  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:45.777303  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:45.777198  668579 retry.go:31] will retry after 724.270735ms: waiting for domain to come up
	I0317 13:44:46.503036  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:46.503704  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:46.503726  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:46.503671  668579 retry.go:31] will retry after 992.581309ms: waiting for domain to come up
	I0317 13:44:47.498301  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:47.498798  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:47.498826  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:47.498768  668579 retry.go:31] will retry after 1.30814635s: waiting for domain to come up
	I0317 13:44:48.808842  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:48.809343  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:48.809402  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:48.809312  668579 retry.go:31] will retry after 1.844453207s: waiting for domain to come up
	I0317 13:44:47.418336  668126 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.279689899s)
	I0317 13:44:47.418390  668126 crio.go:469] duration metric: took 2.279841917s to extract the tarball
	I0317 13:44:47.418402  668126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:44:47.456429  668126 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:44:47.502560  668126 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:44:47.502587  668126 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:44:47.502597  668126 kubeadm.go:934] updating node { 192.168.61.182 8443 v1.32.2 crio true true} ...
	I0317 13:44:47.502719  668126 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-638911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:force-systemd-flag-638911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:44:47.502800  668126 ssh_runner.go:195] Run: crio config
	I0317 13:44:47.553965  668126 cni.go:84] Creating CNI manager for ""
	I0317 13:44:47.553988  668126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:44:47.554001  668126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:44:47.554029  668126 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.182 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-638911 NodeName:force-systemd-flag-638911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.182 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:44:47.554191  668126 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-638911"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.182"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.182"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:44:47.554272  668126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:44:47.563769  668126 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:44:47.563848  668126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:44:47.572766  668126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0317 13:44:47.588068  668126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:44:47.605899  668126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2304 bytes)
	I0317 13:44:47.623593  668126 ssh_runner.go:195] Run: grep 192.168.61.182	control-plane.minikube.internal$ /etc/hosts
	I0317 13:44:47.627413  668126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:44:47.639758  668126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:44:47.789329  668126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:44:47.805998  668126 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911 for IP: 192.168.61.182
	I0317 13:44:47.806030  668126 certs.go:194] generating shared ca certs ...
	I0317 13:44:47.806053  668126 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:47.806291  668126 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:44:47.806366  668126 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:44:47.806383  668126 certs.go:256] generating profile certs ...
	I0317 13:44:47.806464  668126 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.key
	I0317 13:44:47.806502  668126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.crt with IP's: []
	I0317 13:44:47.999886  668126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.crt ...
	I0317 13:44:47.999920  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.crt: {Name:mk611b0bbba778e4de9b41db564bb4b16aaed850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:48.000097  668126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.key ...
	I0317 13:44:48.000111  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.key: {Name:mk0b06d3168bd784bc71540e69b8e94432e272e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:48.000192  668126 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key.d492adc8
	I0317 13:44:48.000207  668126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt.d492adc8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.182]
	I0317 13:44:48.182682  668126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt.d492adc8 ...
	I0317 13:44:48.182727  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt.d492adc8: {Name:mk55bd44ee31dabbe68f2fd171c30c67905f1132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:48.182937  668126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key.d492adc8 ...
	I0317 13:44:48.182958  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key.d492adc8: {Name:mkfc54bc38333888b307dbf401857e83a3257d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:48.183066  668126 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt.d492adc8 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt
	I0317 13:44:48.183162  668126 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key.d492adc8 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key
	I0317 13:44:48.183241  668126 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.key
	I0317 13:44:48.183264  668126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.crt with IP's: []
	I0317 13:44:48.555133  668126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.crt ...
	I0317 13:44:48.555169  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.crt: {Name:mkd271a39c3d669ae2c876cd1c996b14968810ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:48.555383  668126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.key ...
	I0317 13:44:48.555409  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.key: {Name:mkdd41f47afee4b3c05d7b36f24cfff859415a15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
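	The crypto.go lines above generate CA-signed profile certificates, including an apiserver certificate whose SANs are the service IP, localhost, and the node IP listed in the log. A compact crypto/x509 illustration of signing a certificate with those IP SANs under a throwaway CA (error handling elided for brevity; minikube's own helpers differ in detail):
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func main() {
		// Throwaway CA, standing in for minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs from the "Generating cert ... with IP's" line above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.182"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
	}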
	I0317 13:44:48.555526  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0317 13:44:48.555576  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0317 13:44:48.555592  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0317 13:44:48.555605  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0317 13:44:48.555617  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0317 13:44:48.555630  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0317 13:44:48.555641  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0317 13:44:48.555654  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0317 13:44:48.555706  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:44:48.555741  668126 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:44:48.555751  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:44:48.555771  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:44:48.555794  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:44:48.555817  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:44:48.555854  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:44:48.555881  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> /usr/share/ca-certificates/6291882.pem
	I0317 13:44:48.555896  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:44:48.555908  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem -> /usr/share/ca-certificates/629188.pem
	I0317 13:44:48.556419  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:44:48.588045  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:44:48.612710  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:44:48.654612  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:44:48.678360  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0317 13:44:48.701390  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:44:48.727299  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:44:48.751701  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:44:48.774711  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:44:48.797699  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:44:48.820688  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:44:48.844527  668126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:44:48.861291  668126 ssh_runner.go:195] Run: openssl version
	I0317 13:44:48.866773  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:44:48.877330  668126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:44:48.882084  668126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:44:48.882152  668126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:44:48.887901  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:44:48.899086  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:44:48.909739  668126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:44:48.915543  668126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:44:48.915605  668126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:44:48.924887  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:44:48.938140  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:44:48.948748  668126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:44:48.953337  668126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:44:48.953393  668126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:44:48.959115  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
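	The openssl -hash / ln -fs pairs above install each CA into the system trust directory under its OpenSSL subject-hash name (<hash>.0), which is how OpenSSL locates trusted certificates. A small Go sketch of that step, pointed at a writable directory since /etc/ssl/certs needs root (the cert path is taken from the log):
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCertByHash reproduces the openssl -hash / ln -fs pair from the log:
	// it asks openssl for the certificate's subject hash and creates a
	// <hash>.0 symlink to the PEM file in the trust directory.
	func linkCertByHash(certPath, trustDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(trustDir, hash+".0")
		os.Remove(link) // mirror ln -fs: replace any stale link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir())
		fmt.Println("linkCertByHash:", err)
	}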
	I0317 13:44:48.970225  668126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:44:48.974319  668126 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:44:48.974380  668126 kubeadm.go:392] StartCluster: {Name:force-systemd-flag-638911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:force-s
ystemd-flag-638911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:44:48.974481  668126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:44:48.974542  668126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:44:49.012180  668126 cri.go:89] found id: ""
	I0317 13:44:49.012279  668126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:44:49.023049  668126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:44:49.034835  668126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:44:49.046524  668126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:44:49.046555  668126 kubeadm.go:157] found existing configuration files:
	
	I0317 13:44:49.046604  668126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:44:49.056595  668126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:44:49.056659  668126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:44:49.066849  668126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:44:49.075875  668126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:44:49.075942  668126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:44:49.085012  668126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:44:49.094603  668126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:44:49.094682  668126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:44:49.104087  668126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:44:49.113518  668126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:44:49.113583  668126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:44:49.125534  668126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:44:49.338474  668126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:44:51.828302  667886 pod_ready.go:103] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"False"
	I0317 13:44:52.828238  667886 pod_ready.go:93] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:52.828262  667886 pod_ready.go:82] duration metric: took 12.507107151s for pod "etcd-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.828278  667886 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.833008  667886 pod_ready.go:93] pod "kube-apiserver-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:52.833036  667886 pod_ready.go:82] duration metric: took 4.749094ms for pod "kube-apiserver-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.833052  667886 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.837249  667886 pod_ready.go:93] pod "kube-controller-manager-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:52.837274  667886 pod_ready.go:82] duration metric: took 4.21448ms for pod "kube-controller-manager-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.837283  667886 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j6xzf" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.841206  667886 pod_ready.go:93] pod "kube-proxy-j6xzf" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:52.841229  667886 pod_ready.go:82] duration metric: took 3.938504ms for pod "kube-proxy-j6xzf" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.841241  667886 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:50.655076  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:50.655609  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:50.655639  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:50.655581  668579 retry.go:31] will retry after 1.885660977s: waiting for domain to come up
	I0317 13:44:52.543156  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:52.543812  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:52.543867  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:52.543756  668579 retry.go:31] will retry after 2.68611123s: waiting for domain to come up
	I0317 13:44:54.847821  667886 pod_ready.go:93] pod "kube-scheduler-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:54.847920  667886 pod_ready.go:82] duration metric: took 2.006666341s for pod "kube-scheduler-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:54.847948  667886 pod_ready.go:39] duration metric: took 14.53299614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:44:54.847998  667886 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:44:54.863335  667886 ops.go:34] apiserver oom_adj: -16
	I0317 13:44:54.863426  667886 kubeadm.go:597] duration metric: took 33.615812738s to restartPrimaryControlPlane
	I0317 13:44:54.863453  667886 kubeadm.go:394] duration metric: took 34.056168573s to StartCluster
	I0317 13:44:54.863501  667886 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:54.863601  667886 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:44:54.864842  667886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:54.865137  667886 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:44:54.865408  667886 config.go:182] Loaded profile config "pause-880805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:44:54.865433  667886 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:44:54.866805  667886 out.go:177] * Enabled addons: 
	I0317 13:44:54.866812  667886 out.go:177] * Verifying Kubernetes components...
	I0317 13:44:54.868082  667886 addons.go:514] duration metric: took 2.655448ms for enable addons: enabled=[]
	I0317 13:44:54.868155  667886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:44:55.046597  667886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:44:55.062303  667886 node_ready.go:35] waiting up to 6m0s for node "pause-880805" to be "Ready" ...
	I0317 13:44:55.065238  667886 node_ready.go:49] node "pause-880805" has status "Ready":"True"
	I0317 13:44:55.065263  667886 node_ready.go:38] duration metric: took 2.908667ms for node "pause-880805" to be "Ready" ...
	I0317 13:44:55.065274  667886 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:44:55.067875  667886 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:55.226383  667886 pod_ready.go:93] pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:55.226412  667886 pod_ready.go:82] duration metric: took 158.507648ms for pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:55.226423  667886 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:55.625495  667886 pod_ready.go:93] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:55.625527  667886 pod_ready.go:82] duration metric: took 399.096546ms for pod "etcd-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:55.625540  667886 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.026020  667886 pod_ready.go:93] pod "kube-apiserver-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:56.026044  667886 pod_ready.go:82] duration metric: took 400.496433ms for pod "kube-apiserver-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.026055  667886 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.425541  667886 pod_ready.go:93] pod "kube-controller-manager-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:56.425566  667886 pod_ready.go:82] duration metric: took 399.504164ms for pod "kube-controller-manager-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.425575  667886 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j6xzf" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.825728  667886 pod_ready.go:93] pod "kube-proxy-j6xzf" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:56.825764  667886 pod_ready.go:82] duration metric: took 400.180922ms for pod "kube-proxy-j6xzf" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.825779  667886 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:57.225623  667886 pod_ready.go:93] pod "kube-scheduler-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:57.225655  667886 pod_ready.go:82] duration metric: took 399.866844ms for pod "kube-scheduler-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:57.225666  667886 pod_ready.go:39] duration metric: took 2.160376658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:44:57.225686  667886 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:44:57.225752  667886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:44:57.240457  667886 api_server.go:72] duration metric: took 2.375250516s to wait for apiserver process to appear ...
	I0317 13:44:57.240489  667886 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:44:57.240508  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:57.244616  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0317 13:44:57.245643  667886 api_server.go:141] control plane version: v1.32.2
	I0317 13:44:57.245670  667886 api_server.go:131] duration metric: took 5.173045ms to wait for apiserver health ...
	I0317 13:44:57.245681  667886 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:44:57.426533  667886 system_pods.go:59] 6 kube-system pods found
	I0317 13:44:57.426574  667886 system_pods.go:61] "coredns-668d6bf9bc-nttjk" [b185d851-a2d4-4a9f-a30b-26d34b39beeb] Running
	I0317 13:44:57.426585  667886 system_pods.go:61] "etcd-pause-880805" [4fc616fb-91ce-443f-a9a5-1ab37a052d19] Running
	I0317 13:44:57.426592  667886 system_pods.go:61] "kube-apiserver-pause-880805" [d440f7c0-a631-4145-80f8-f4e50ed71084] Running
	I0317 13:44:57.426597  667886 system_pods.go:61] "kube-controller-manager-pause-880805" [32c3bc6f-471f-415a-84e8-dd540d5c6023] Running
	I0317 13:44:57.426603  667886 system_pods.go:61] "kube-proxy-j6xzf" [735bd65e-41e7-48bc-b9c2-c6fdda988310] Running
	I0317 13:44:57.426611  667886 system_pods.go:61] "kube-scheduler-pause-880805" [a8237c89-96b9-478c-aed0-113dc4e3b1dc] Running
	I0317 13:44:57.426619  667886 system_pods.go:74] duration metric: took 180.929907ms to wait for pod list to return data ...
	I0317 13:44:57.426629  667886 default_sa.go:34] waiting for default service account to be created ...
	I0317 13:44:57.626004  667886 default_sa.go:45] found service account: "default"
	I0317 13:44:57.626039  667886 default_sa.go:55] duration metric: took 199.401959ms for default service account to be created ...
	I0317 13:44:57.626052  667886 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 13:44:57.826047  667886 system_pods.go:86] 6 kube-system pods found
	I0317 13:44:57.826090  667886 system_pods.go:89] "coredns-668d6bf9bc-nttjk" [b185d851-a2d4-4a9f-a30b-26d34b39beeb] Running
	I0317 13:44:57.826097  667886 system_pods.go:89] "etcd-pause-880805" [4fc616fb-91ce-443f-a9a5-1ab37a052d19] Running
	I0317 13:44:57.826105  667886 system_pods.go:89] "kube-apiserver-pause-880805" [d440f7c0-a631-4145-80f8-f4e50ed71084] Running
	I0317 13:44:57.826110  667886 system_pods.go:89] "kube-controller-manager-pause-880805" [32c3bc6f-471f-415a-84e8-dd540d5c6023] Running
	I0317 13:44:57.826114  667886 system_pods.go:89] "kube-proxy-j6xzf" [735bd65e-41e7-48bc-b9c2-c6fdda988310] Running
	I0317 13:44:57.826119  667886 system_pods.go:89] "kube-scheduler-pause-880805" [a8237c89-96b9-478c-aed0-113dc4e3b1dc] Running
	I0317 13:44:57.826129  667886 system_pods.go:126] duration metric: took 200.069419ms to wait for k8s-apps to be running ...
	I0317 13:44:57.826138  667886 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 13:44:57.826201  667886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:44:57.843116  667886 system_svc.go:56] duration metric: took 16.963351ms WaitForService to wait for kubelet
	I0317 13:44:57.843159  667886 kubeadm.go:582] duration metric: took 2.977959339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:44:57.843184  667886 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:44:58.025516  667886 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:44:58.025547  667886 node_conditions.go:123] node cpu capacity is 2
	I0317 13:44:58.025559  667886 node_conditions.go:105] duration metric: took 182.370464ms to run NodePressure ...
	I0317 13:44:58.025573  667886 start.go:241] waiting for startup goroutines ...
	I0317 13:44:58.025579  667886 start.go:246] waiting for cluster config update ...
	I0317 13:44:58.025588  667886 start.go:255] writing updated cluster config ...
	I0317 13:44:58.025922  667886 ssh_runner.go:195] Run: rm -f paused
	I0317 13:44:58.078147  667886 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 13:44:58.080186  667886 out.go:177] * Done! kubectl is now configured to use "pause-880805" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.810570192Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16c9de57-a17b-4705-8a34-36329253c603 name=/runtime.v1.RuntimeService/Version
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.817648882Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=620ae8b6-c088-4cd3-beb6-9bd55f969c1b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.818244342Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219098818205861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=620ae8b6-c088-4cd3-beb6-9bd55f969c1b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.818937564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4ebca03-a409-49bc-8c37-58f219b1842b name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.819034425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4ebca03-a409-49bc-8c37-58f219b1842b name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.819273961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f4bba918c2ed691ed93db3bfa2a430b91e978f7fb04ccfcdf30233f10f4762b,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742219079120353152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1accdee97904160e759bd5618a179ca1acacf6f6d81271b331d0920579f83991,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742219075284669384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2914ee1335aa2a6be26c50a3a61c5dca91834112ccc8ff1759800af0c1ea123a,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742219075253306028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a03eefae624c51f84e0f7e8f0b2dfd25ad7791bdfcbb62cc6bcd2773b5419186,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742219075265749982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93948e4f8ff37d0005ad21daaa2c57e3e7d94f8f606e996a75013ff3731c84e,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742219075268502181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b0efa9e2456674ffdd911261ae20d5427752acea6606d860c649f492a46374,PodSandboxId:fafd1e94e95001cb2bd2e8c3299260bcdb014cc294be911af7604332119d91c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742219061485030481,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1742219060636810404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1742219060585693061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1742219060510612651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1742219060482777397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1742219060498035916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2,PodSandboxId:f543f844cad0307c22933653a3c0d8327a16f27d31224cc5aac21478204e3b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1742219017876831957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4ebca03-a409-49bc-8c37-58f219b1842b name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.840468039Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=562461cb-3d48-4896-ae88-0e5f30bcbd50 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.841069643Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:fafd1e94e95001cb2bd2e8c3299260bcdb014cc294be911af7604332119d91c3,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-nttjk,Uid:b185d851-a2d4-4a9f-a30b-26d34b39beeb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1742219060379575162,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T13:43:37.103017948Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-880805,Uid:c4b06f6c607f93299e404e8065aa6c4c,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1742219060235766431,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c4b06f6c607f93299e404e8065aa6c4c,kubernetes.io/config.seen: 2025-03-17T13:43:31.902592648Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-880805,Uid:ad2bdc060f307a9090c1aa6b0520c197,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1742219060197110585,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307
a9090c1aa6b0520c197,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ad2bdc060f307a9090c1aa6b0520c197,kubernetes.io/config.seen: 2025-03-17T13:43:31.902591437Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-880805,Uid:20aeb65e4e70616736efe266d0bd89c2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1742219060196574169,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.171:8443,kubernetes.io/config.hash: 20aeb65e4e70616736efe266d0bd89c2,kubernetes.io/config.seen: 2025-03-17T13:43:31.902590192Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&PodSandboxMetadata{Name:etcd-pause-880805,Uid:a2d84b66050c0312b166d06ad0951247,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1742219060176867001,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.171:2379,kubernetes.io/config.hash: a2d84b66050c0312b166d06ad0951247,kubernetes.io/config.seen: 2025-03-17T13:43:31.902586142Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&PodSandboxMetadata{Name:kube-proxy-j6xzf,Uid:735bd65e-41e7-48bc-b9c2-c6fdda988310,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1742219060163915553,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T13:43:36.488476207Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=562461cb-3d48-4896-ae88-0e5f30bcbd50 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.842217624Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee2cbbe2-c150-4547-8ce3-9db1eb5e951f name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.842287665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee2cbbe2-c150-4547-8ce3-9db1eb5e951f name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.842431582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f4bba918c2ed691ed93db3bfa2a430b91e978f7fb04ccfcdf30233f10f4762b,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742219079120353152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1accdee97904160e759bd5618a179ca1acacf6f6d81271b331d0920579f83991,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742219075284669384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2914ee1335aa2a6be26c50a3a61c5dca91834112ccc8ff1759800af0c1ea123a,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742219075253306028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a03eefae624c51f84e0f7e8f0b2dfd25ad7791bdfcbb62cc6bcd2773b5419186,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742219075265749982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93948e4f8ff37d0005ad21daaa2c57e3e7d94f8f606e996a75013ff3731c84e,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742219075268502181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b0efa9e2456674ffdd911261ae20d5427752acea6606d860c649f492a46374,PodSandboxId:fafd1e94e95001cb2bd2e8c3299260bcdb014cc294be911af7604332119d91c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742219061485030481,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee2cbbe2-c150-4547-8ce3-9db1eb5e951f name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.866410638Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=870663fe-2cf5-4494-9e27-58ad3c37bb5e name=/runtime.v1.RuntimeService/Version
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.866508221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=870663fe-2cf5-4494-9e27-58ad3c37bb5e name=/runtime.v1.RuntimeService/Version
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.867662047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3524996b-17b2-4cec-b094-6b2c08f64f43 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.868051116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219098868030209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3524996b-17b2-4cec-b094-6b2c08f64f43 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.868686946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e54dd57-a051-4a77-a050-2e872abedb35 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.868804741Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e54dd57-a051-4a77-a050-2e872abedb35 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.869147971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f4bba918c2ed691ed93db3bfa2a430b91e978f7fb04ccfcdf30233f10f4762b,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742219079120353152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1accdee97904160e759bd5618a179ca1acacf6f6d81271b331d0920579f83991,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742219075284669384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2914ee1335aa2a6be26c50a3a61c5dca91834112ccc8ff1759800af0c1ea123a,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742219075253306028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a03eefae624c51f84e0f7e8f0b2dfd25ad7791bdfcbb62cc6bcd2773b5419186,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742219075265749982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93948e4f8ff37d0005ad21daaa2c57e3e7d94f8f606e996a75013ff3731c84e,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742219075268502181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b0efa9e2456674ffdd911261ae20d5427752acea6606d860c649f492a46374,PodSandboxId:fafd1e94e95001cb2bd2e8c3299260bcdb014cc294be911af7604332119d91c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742219061485030481,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1742219060636810404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1742219060585693061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1742219060510612651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1742219060482777397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1742219060498035916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2,PodSandboxId:f543f844cad0307c22933653a3c0d8327a16f27d31224cc5aac21478204e3b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1742219017876831957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e54dd57-a051-4a77-a050-2e872abedb35 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.912872350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ab60818-3669-4ae8-9d33-459dbfc1f102 name=/runtime.v1.RuntimeService/Version
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.912987865Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ab60818-3669-4ae8-9d33-459dbfc1f102 name=/runtime.v1.RuntimeService/Version
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.914049666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc6789ac-2049-4034-b3d0-08079a11a9cf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.914411628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219098914389576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc6789ac-2049-4034-b3d0-08079a11a9cf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.914802434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=154a0edc-0866-4a57-ba0f-9c3039c2035f name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.914870979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=154a0edc-0866-4a57-ba0f-9c3039c2035f name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:44:58 pause-880805 crio[2349]: time="2025-03-17 13:44:58.915134407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f4bba918c2ed691ed93db3bfa2a430b91e978f7fb04ccfcdf30233f10f4762b,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742219079120353152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1accdee97904160e759bd5618a179ca1acacf6f6d81271b331d0920579f83991,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742219075284669384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2914ee1335aa2a6be26c50a3a61c5dca91834112ccc8ff1759800af0c1ea123a,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742219075253306028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a03eefae624c51f84e0f7e8f0b2dfd25ad7791bdfcbb62cc6bcd2773b5419186,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742219075265749982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93948e4f8ff37d0005ad21daaa2c57e3e7d94f8f606e996a75013ff3731c84e,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742219075268502181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b0efa9e2456674ffdd911261ae20d5427752acea6606d860c649f492a46374,PodSandboxId:fafd1e94e95001cb2bd2e8c3299260bcdb014cc294be911af7604332119d91c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742219061485030481,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1742219060636810404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1742219060585693061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1742219060510612651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1742219060482777397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1742219060498035916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2,PodSandboxId:f543f844cad0307c22933653a3c0d8327a16f27d31224cc5aac21478204e3b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1742219017876831957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=154a0edc-0866-4a57-ba0f-9c3039c2035f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0f4bba918c2ed       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   19 seconds ago       Running             kube-proxy                2                   09d43634e3c78       kube-proxy-j6xzf
	1accdee979041       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   23 seconds ago       Running             kube-scheduler            2                   a55f033e05b58       kube-scheduler-pause-880805
	f93948e4f8ff3       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   23 seconds ago       Running             kube-controller-manager   2                   3491c828f7b1d       kube-controller-manager-pause-880805
	a03eefae624c5       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   23 seconds ago       Running             etcd                      2                   9c7a532ac45a3       etcd-pause-880805
	2914ee1335aa2       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   23 seconds ago       Running             kube-apiserver            2                   5de94530a8874       kube-apiserver-pause-880805
	55b0efa9e2456       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   37 seconds ago       Running             coredns                   1                   fafd1e94e9500       coredns-668d6bf9bc-nttjk
	48c1359285825       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   38 seconds ago       Exited              etcd                      1                   9c7a532ac45a3       etcd-pause-880805
	0a0da51f434a6       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   38 seconds ago       Exited              kube-controller-manager   1                   3491c828f7b1d       kube-controller-manager-pause-880805
	bef8d4672be7b       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   38 seconds ago       Exited              kube-apiserver            1                   5de94530a8874       kube-apiserver-pause-880805
	e21abd81d7508       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   38 seconds ago       Exited              kube-scheduler            1                   a55f033e05b58       kube-scheduler-pause-880805
	43efe3e98767e       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   38 seconds ago       Exited              kube-proxy                1                   09d43634e3c78       kube-proxy-j6xzf
	8116dadd952ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   f543f844cad03       coredns-668d6bf9bc-nttjk
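	
	The status table above is the CRI-level view captured from cri-o on the node. A roughly equivalent listing can be reproduced by hand (illustrative only, assuming the pause-880805 profile is still running):
	
	  # list all containers, including exited ones, via the cri-o socket inside the VM
	  minikube -p pause-880805 ssh -- sudo crictl ps -a
	  # inspect a single container by the (possibly truncated) ID shown in the table
	  minikube -p pause-880805 ssh -- sudo crictl inspect 0f4bba918c2ed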
	
	
	==> coredns [55b0efa9e2456674ffdd911261ae20d5427752acea6606d860c649f492a46374] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[966311080]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:44:21.721) (total time: 10001ms):
	Trace[966311080]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:44:31.722)
	Trace[966311080]: [10.001013795s] [10.001013795s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1762825214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:44:21.724) (total time: 10000ms):
	Trace[1762825214]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:44:31.725)
	Trace[1762825214]: [10.000795801s] [10.000795801s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[640229930]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:44:21.725) (total time: 10000ms):
	Trace[640229930]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:44:31.725)
	Trace[640229930]: [10.000500401s] [10.000500401s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
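	
	The handshake timeouts and connection-refused errors against 10.96.0.1:443 cover the window in which the kube-apiserver was being restarted; the kubernetes ClusterIP has no reachable backend until the replacement apiserver container is serving again, after which CoreDNS's reflectors recover on their own. The service and its backing endpoints can be cross-checked afterwards (illustrative, not part of the test run):
	
	  kubectl --context pause-880805 get svc kubernetes            # ClusterIP is 10.96.0.1 in this cluster
	  kubectl --context pause-880805 get endpointslices -l kubernetes.io/service-name=kubernetes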
	
	
	==> coredns [8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52791 - 19689 "HINFO IN 8981724560520950910.7206283758638428393. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.053815495s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
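	
	This is the pre-restart CoreDNS instance (attempt 0): the SIGTERM and 5s lameduck entries show it being stopped cleanly while the replacement container (55b0efa9e2456, attempt 1, above) takes over. The resulting restart count is also visible on the pod itself (illustrative):
	
	  kubectl --context pause-880805 -n kube-system get pods -l k8s-app=kube-dns -o wide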
	
	
	==> describe nodes <==
	Name:               pause-880805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-880805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c
	                    minikube.k8s.io/name=pause-880805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T13_43_32_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 13:43:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-880805
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 13:44:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 13:44:38 +0000   Mon, 17 Mar 2025 13:43:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 13:44:38 +0000   Mon, 17 Mar 2025 13:43:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 13:44:38 +0000   Mon, 17 Mar 2025 13:43:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 13:44:38 +0000   Mon, 17 Mar 2025 13:43:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    pause-880805
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 c431cdce110945adbf98cae894dca125
	  System UUID:                c431cdce-1109-45ad-bf98-cae894dca125
	  Boot ID:                    9bde7860-4726-428d-a194-809967bcd0e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-nttjk                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     83s
	  kube-system                 etcd-pause-880805                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         87s
	  kube-system                 kube-apiserver-pause-880805             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-880805    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-j6xzf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-pause-880805             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  93s (x8 over 93s)  kubelet          Node pause-880805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s (x8 over 93s)  kubelet          Node pause-880805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s (x7 over 93s)  kubelet          Node pause-880805 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  93s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    87s                kubelet          Node pause-880805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s                kubelet          Node pause-880805 status is now: NodeHasSufficientPID
	  Normal  NodeReady                87s                kubelet          Node pause-880805 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  87s                kubelet          Node pause-880805 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           84s                node-controller  Node pause-880805 event: Registered Node pause-880805 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-880805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-880805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-880805 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-880805 event: Registered Node pause-880805 in Controller
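	
	For reference, the percentages in the Allocated resources block above are taken against the node's allocatable capacity: 750m of 2 CPUs (2000m) is about 37%, and 170Mi of 2015704Ki (roughly 1968Mi) is about 8%, so the control-plane pods alone reserve over a third of this 2-vCPU VM's CPU before any test workload runs.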
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.941373] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.056121] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053744] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.191774] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.111552] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.243269] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.986973] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +5.364653] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.056661] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.505433] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.087221] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.273144] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.098100] kauditd_printk_skb: 18 callbacks suppressed
	[Mar17 13:44] systemd-fstab-generator[2274]: Ignoring "noauto" option for root device
	[  +0.084669] kauditd_printk_skb: 103 callbacks suppressed
	[  +0.073275] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.171305] systemd-fstab-generator[2300]: Ignoring "noauto" option for root device
	[  +0.144950] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	[  +0.268208] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +1.504793] systemd-fstab-generator[2465]: Ignoring "noauto" option for root device
	[ +12.400095] kauditd_printk_skb: 197 callbacks suppressed
	[  +2.551811] systemd-fstab-generator[3298]: Ignoring "noauto" option for root device
	[  +4.535186] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.748351] systemd-fstab-generator[3714]: Ignoring "noauto" option for root device
	
	
	==> etcd [48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc] <==
	{"level":"warn","ts":"2025-03-17T13:44:21.261703Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-03-17T13:44:21.262112Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.171:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.171:2380","--initial-cluster=pause-880805=https://192.168.39.171:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.171:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.171:2380","--name=pause-880805","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2025-03-17T13:44:21.269795Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2025-03-17T13:44:21.269843Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-03-17T13:44:21.269861Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.171:2380"]}
	{"level":"info","ts":"2025-03-17T13:44:21.269915Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-03-17T13:44:21.272848Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.171:2379"]}
	{"level":"info","ts":"2025-03-17T13:44:21.273462Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-880805","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.171:2380"],"listen-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.171:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-clust
er-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2025-03-17T13:44:21.308860Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"24.537232ms"}
	{"level":"info","ts":"2025-03-17T13:44:21.331636Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-03-17T13:44:21.349830Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","commit-index":438}
	{"level":"info","ts":"2025-03-17T13:44:21.352229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f switched to configuration voters=()"}
	{"level":"info","ts":"2025-03-17T13:44:21.354027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became follower at term 2"}
	{"level":"info","ts":"2025-03-17T13:44:21.354101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4e6b9cdcc1ed933f [peers: [], term: 2, commit: 438, applied: 0, lastindex: 438, lastterm: 2]"}
	{"level":"warn","ts":"2025-03-17T13:44:21.359329Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-03-17T13:44:21.421117Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":416}
	
	
	==> etcd [a03eefae624c51f84e0f7e8f0b2dfd25ad7791bdfcbb62cc6bcd2773b5419186] <==
	{"level":"info","ts":"2025-03-17T13:44:37.414207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T13:44:37.414238Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-17T13:44:37.414317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:44:37.414890Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:44:37.415558Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.171:2379"}
	{"level":"info","ts":"2025-03-17T13:44:37.418381Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:44:37.419085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T13:44:48.986948Z","caller":"traceutil/trace.go:171","msg":"trace[436761010] linearizableReadLoop","detail":"{readStateIndex:530; appliedIndex:529; }","duration":"185.706103ms","start":"2025-03-17T13:44:48.801225Z","end":"2025-03-17T13:44:48.986931Z","steps":["trace[436761010] 'read index received'  (duration: 185.607147ms)","trace[436761010] 'applied index is now lower than readState.Index'  (duration: 98.451µs)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T13:44:48.987234Z","caller":"traceutil/trace.go:171","msg":"trace[729740229] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"212.726871ms","start":"2025-03-17T13:44:48.774487Z","end":"2025-03-17T13:44:48.987213Z","steps":["trace[729740229] 'process raft request'  (duration: 212.03331ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:44:48.988035Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.798972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-880805\" limit:1 ","response":"range_response_count:1 size:5847"}
	{"level":"info","ts":"2025-03-17T13:44:48.988999Z","caller":"traceutil/trace.go:171","msg":"trace[1332939784] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-880805; range_end:; response_count:1; response_revision:487; }","duration":"187.774614ms","start":"2025-03-17T13:44:48.801173Z","end":"2025-03-17T13:44:48.988948Z","steps":["trace[1332939784] 'agreement among raft nodes before linearized reading'  (duration: 186.714602ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:44:49.393590Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.816947ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10610363780214823200 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" mod_revision:420 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-03-17T13:44:49.393999Z","caller":"traceutil/trace.go:171","msg":"trace[1470306950] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:530; }","duration":"401.884895ms","start":"2025-03-17T13:44:48.992062Z","end":"2025-03-17T13:44:49.393946Z","steps":["trace[1470306950] 'read index received'  (duration: 113.223944ms)","trace[1470306950] 'applied index is now lower than readState.Index'  (duration: 288.659342ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T13:44:49.394333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"402.259079ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-880805\" limit:1 ","response":"range_response_count:1 size:5428"}
	{"level":"info","ts":"2025-03-17T13:44:49.394441Z","caller":"traceutil/trace.go:171","msg":"trace[405305159] range","detail":"{range_begin:/registry/minions/pause-880805; range_end:; response_count:1; response_revision:488; }","duration":"402.394277ms","start":"2025-03-17T13:44:48.992038Z","end":"2025-03-17T13:44:49.394432Z","steps":["trace[405305159] 'agreement among raft nodes before linearized reading'  (duration: 402.204369ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:44:49.394593Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T13:44:48.992024Z","time spent":"402.55052ms","remote":"127.0.0.1:40064","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":5451,"request content":"key:\"/registry/minions/pause-880805\" limit:1 "}
	{"level":"info","ts":"2025-03-17T13:44:49.394451Z","caller":"traceutil/trace.go:171","msg":"trace[1027070799] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"574.051038ms","start":"2025-03-17T13:44:48.820380Z","end":"2025-03-17T13:44:49.394431Z","steps":["trace[1027070799] 'process raft request'  (duration: 285.006733ms)","trace[1027070799] 'compare'  (duration: 287.496908ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T13:44:49.394764Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T13:44:48.820358Z","time spent":"574.355321ms","remote":"127.0.0.1:40148","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" mod_revision:420 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" > >"}
	{"level":"warn","ts":"2025-03-17T13:44:49.969166Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.87184ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10610363780214823208 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.171\" mod_revision:428 > success:<request_put:<key:\"/registry/masterleases/192.168.39.171\" value_size:67 lease:1386991743360047398 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.171\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-03-17T13:44:49.969324Z","caller":"traceutil/trace.go:171","msg":"trace[1588882162] linearizableReadLoop","detail":"{readStateIndex:533; appliedIndex:532; }","duration":"167.908326ms","start":"2025-03-17T13:44:49.801401Z","end":"2025-03-17T13:44:49.969309Z","steps":["trace[1588882162] 'read index received'  (duration: 35.743009ms)","trace[1588882162] 'applied index is now lower than readState.Index'  (duration: 132.164397ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T13:44:49.969411Z","caller":"traceutil/trace.go:171","msg":"trace[918317538] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"193.138658ms","start":"2025-03-17T13:44:49.776252Z","end":"2025-03-17T13:44:49.969391Z","steps":["trace[918317538] 'process raft request'  (duration: 60.945004ms)","trace[918317538] 'compare'  (duration: 131.65165ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T13:44:49.969437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.054811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-880805\" limit:1 ","response":"range_response_count:1 size:5847"}
	{"level":"info","ts":"2025-03-17T13:44:49.969539Z","caller":"traceutil/trace.go:171","msg":"trace[1770803393] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-880805; range_end:; response_count:1; response_revision:489; }","duration":"168.186629ms","start":"2025-03-17T13:44:49.801344Z","end":"2025-03-17T13:44:49.969531Z","steps":["trace[1770803393] 'agreement among raft nodes before linearized reading'  (duration: 168.022777ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:44:51.338639Z","caller":"traceutil/trace.go:171","msg":"trace[631472093] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"250.406614ms","start":"2025-03-17T13:44:51.088215Z","end":"2025-03-17T13:44:51.338622Z","steps":["trace[631472093] 'process raft request'  (duration: 250.23266ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:44:51.647770Z","caller":"traceutil/trace.go:171","msg":"trace[1022612325] transaction","detail":"{read_only:false; response_revision:491; number_of_response:1; }","duration":"299.333946ms","start":"2025-03-17T13:44:51.348412Z","end":"2025-03-17T13:44:51.647746Z","steps":["trace[1022612325] 'process raft request'  (duration: 299.178871ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:44:59 up 2 min,  0 users,  load average: 0.74, 0.29, 0.10
	Linux pause-880805 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2914ee1335aa2a6be26c50a3a61c5dca91834112ccc8ff1759800af0c1ea123a] <==
	I0317 13:44:38.553122       1 aggregator.go:171] initial CRD sync complete...
	I0317 13:44:38.553152       1 autoregister_controller.go:144] Starting autoregister controller
	I0317 13:44:38.553159       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0317 13:44:38.553164       1 cache.go:39] Caches are synced for autoregister controller
	I0317 13:44:38.570028       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0317 13:44:38.595125       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0317 13:44:38.595155       1 policy_source.go:240] refreshing policies
	I0317 13:44:38.636467       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0317 13:44:38.638737       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0317 13:44:38.639317       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0317 13:44:38.639363       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0317 13:44:38.641154       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0317 13:44:38.640209       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0317 13:44:38.646650       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0317 13:44:38.647946       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0317 13:44:38.654459       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0317 13:44:38.844728       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 13:44:39.447667       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0317 13:44:40.135558       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 13:44:40.176906       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 13:44:40.199915       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 13:44:40.205907       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 13:44:42.021685       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 13:44:42.069889       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 13:44:47.596883       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb] <==
	W0317 13:44:20.959147       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0317 13:44:20.960176       1 options.go:238] external host was not specified, using 192.168.39.171
	I0317 13:44:20.970225       1 server.go:143] Version: v1.32.2
	I0317 13:44:20.970319       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0317 13:44:21.775129       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:21.776896       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0317 13:44:21.782626       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0317 13:44:21.799676       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0317 13:44:21.810070       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0317 13:44:21.810114       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0317 13:44:21.810439       1 instance.go:233] Using reconciler: lease
	W0317 13:44:21.811554       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:22.776327       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:22.777786       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:22.812576       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:24.242561       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:24.242662       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:24.441326       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:26.581906       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:26.676936       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:26.800227       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:30.573494       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:30.590311       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:30.771643       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
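	
	This is the previous kube-apiserver attempt: it started while etcd on 127.0.0.1:2379 was still down, so every storage channel kept retrying with "connection refused" until the container was replaced. Once the new etcd began serving at 13:44:37, the current apiserver (2914ee1335aa2, above) synced its caches within about a second. While debugging this kind of startup race, etcd's plain-HTTP health endpoint on the metrics listener can be probed from inside the VM (illustrative):
	
	  curl -s http://127.0.0.1:2381/health     # listen-metrics-urls from the etcd command line above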
	
	
	==> kube-controller-manager [0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234] <==
	
	
	==> kube-controller-manager [f93948e4f8ff37d0005ad21daaa2c57e3e7d94f8f606e996a75013ff3731c84e] <==
	I0317 13:44:41.758413       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0317 13:44:41.760761       1 shared_informer.go:320] Caches are synced for crt configmap
	I0317 13:44:41.763105       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0317 13:44:41.764455       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0317 13:44:41.766734       1 shared_informer.go:320] Caches are synced for endpoint
	I0317 13:44:41.766806       1 shared_informer.go:320] Caches are synced for deployment
	I0317 13:44:41.766812       1 shared_informer.go:320] Caches are synced for disruption
	I0317 13:44:41.766886       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0317 13:44:41.767566       1 shared_informer.go:320] Caches are synced for job
	I0317 13:44:41.767927       1 shared_informer.go:320] Caches are synced for daemon sets
	I0317 13:44:41.768776       1 shared_informer.go:320] Caches are synced for attach detach
	I0317 13:44:41.769367       1 shared_informer.go:320] Caches are synced for stateful set
	I0317 13:44:41.771320       1 shared_informer.go:320] Caches are synced for TTL
	I0317 13:44:41.772636       1 shared_informer.go:320] Caches are synced for PVC protection
	I0317 13:44:41.773846       1 shared_informer.go:320] Caches are synced for resource quota
	I0317 13:44:41.775017       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0317 13:44:41.783275       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0317 13:44:41.786863       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0317 13:44:41.791135       1 shared_informer.go:320] Caches are synced for persistent volume
	I0317 13:44:41.791218       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 13:44:41.792475       1 shared_informer.go:320] Caches are synced for PV protection
	I0317 13:44:47.605102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="29.089822ms"
	I0317 13:44:47.605342       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="85.251µs"
	I0317 13:44:47.627470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="21.729397ms"
	I0317 13:44:47.627856       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="107.777µs"
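	
	No output was captured for the exited controller-manager container (0a0da51f434a6) above; this replacement instance shows the expected post-restart sequence, with informer caches resyncing at 13:44:41 and the coredns ReplicaSet re-synced at 13:44:47. The leader-election handover behind the restart can be confirmed from the coordination lease (illustrative):
	
	  kubectl --context pause-880805 -n kube-system get lease kube-controller-manager -o yaml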
	
	
	==> kube-proxy [0f4bba918c2ed691ed93db3bfa2a430b91e978f7fb04ccfcdf30233f10f4762b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 13:44:39.258901       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0317 13:44:39.266419       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.171"]
	E0317 13:44:39.266611       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 13:44:39.295199       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0317 13:44:39.295235       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0317 13:44:39.295264       1 server_linux.go:170] "Using iptables Proxier"
	I0317 13:44:39.297423       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 13:44:39.297688       1 server.go:497] "Version info" version="v1.32.2"
	I0317 13:44:39.297730       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:44:39.298923       1 config.go:199] "Starting service config controller"
	I0317 13:44:39.299027       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 13:44:39.299065       1 config.go:105] "Starting endpoint slice config controller"
	I0317 13:44:39.299083       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 13:44:39.299565       1 config.go:329] "Starting node config controller"
	I0317 13:44:39.299619       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 13:44:39.399887       1 shared_informer.go:320] Caches are synced for node config
	I0317 13:44:39.400061       1 shared_informer.go:320] Caches are synced for service config
	I0317 13:44:39.400071       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523] <==
	I0317 13:44:21.824672       1 server_linux.go:66] "Using iptables proxy"
	E0317 13:44:21.857633       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 13:44:21.894314       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 13:44:32.740661       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-880805\": dial tcp 192.168.39.171:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.171:47880->192.168.39.171:8443: read: connection reset by peer"
	
	
	==> kube-scheduler [1accdee97904160e759bd5618a179ca1acacf6f6d81271b331d0920579f83991] <==
	I0317 13:44:36.642606       1 serving.go:386] Generated self-signed cert in-memory
	W0317 13:44:38.530418       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0317 13:44:38.530456       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0317 13:44:38.530466       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0317 13:44:38.530472       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0317 13:44:38.559247       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0317 13:44:38.559486       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:44:38.561894       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0317 13:44:38.562245       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0317 13:44:38.562334       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0317 13:44:38.562258       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0317 13:44:38.663118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9] <==
	I0317 13:44:22.217750       1 serving.go:386] Generated self-signed cert in-memory
	W0317 13:44:32.740808       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.171:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.171:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.171:47890->192.168.39.171:8443: read: connection reset by peer
	W0317 13:44:32.740838       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0317 13:44:32.740845       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0317 13:44:32.755084       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0317 13:44:32.755152       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0317 13:44:32.755181       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0317 13:44:32.757173       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0317 13:44:32.757247       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0317 13:44:32.757275       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0317 13:44:32.757573       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0317 13:44:32.757638       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0317 13:44:32.757796       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0317 13:44:32.757901       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0317 13:44:32.758276       1 server.go:266] "waiting for handlers to sync" err="context canceled"
	I0317 13:44:32.758359       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0317 13:44:32.758460       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 17 13:44:36 pause-880805 kubelet[3305]: E0317 13:44:36.974262    3305 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-880805\" not found" node="pause-880805"
	Mar 17 13:44:37 pause-880805 kubelet[3305]: E0317 13:44:37.974423    3305 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-880805\" not found" node="pause-880805"
	Mar 17 13:44:37 pause-880805 kubelet[3305]: E0317 13:44:37.975260    3305 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-880805\" not found" node="pause-880805"
	Mar 17 13:44:37 pause-880805 kubelet[3305]: E0317 13:44:37.976131    3305 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-880805\" not found" node="pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.523716    3305 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.613849    3305 kubelet_node_status.go:125] "Node was previously registered" node="pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.613939    3305 kubelet_node_status.go:79] "Successfully registered node" node="pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.614027    3305 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.615344    3305 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: E0317 13:44:38.660244    3305 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-880805\" already exists" pod="kube-system/etcd-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.660289    3305 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: E0317 13:44:38.670156    3305 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-880805\" already exists" pod="kube-system/kube-apiserver-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.670199    3305 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: E0317 13:44:38.678413    3305 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-880805\" already exists" pod="kube-system/kube-controller-manager-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.678465    3305 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: E0317 13:44:38.685908    3305 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-880805\" already exists" pod="kube-system/kube-scheduler-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.805000    3305 apiserver.go:52] "Watching apiserver"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.823497    3305 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.840087    3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/735bd65e-41e7-48bc-b9c2-c6fdda988310-lib-modules\") pod \"kube-proxy-j6xzf\" (UID: \"735bd65e-41e7-48bc-b9c2-c6fdda988310\") " pod="kube-system/kube-proxy-j6xzf"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.840262    3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/735bd65e-41e7-48bc-b9c2-c6fdda988310-xtables-lock\") pod \"kube-proxy-j6xzf\" (UID: \"735bd65e-41e7-48bc-b9c2-c6fdda988310\") " pod="kube-system/kube-proxy-j6xzf"
	Mar 17 13:44:39 pause-880805 kubelet[3305]: I0317 13:44:39.109278    3305 scope.go:117] "RemoveContainer" containerID="43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523"
	Mar 17 13:44:44 pause-880805 kubelet[3305]: E0317 13:44:44.933524    3305 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219084932809517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 13:44:44 pause-880805 kubelet[3305]: E0317 13:44:44.933597    3305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219084932809517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 13:44:54 pause-880805 kubelet[3305]: E0317 13:44:54.936676    3305 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219094936188037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 13:44:54 pause-880805 kubelet[3305]: E0317 13:44:54.936713    3305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219094936188037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-880805 -n pause-880805
helpers_test.go:261: (dbg) Run:  kubectl --context pause-880805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-880805 -n pause-880805
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-880805 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-880805 logs -n 25: (1.302472841s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo docker                         | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo cat                            | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo                                | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo find                           | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-788750 sudo crio                           | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-788750                                     | cilium-788750             | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:43 UTC |
	| start   | -p cert-expiration-355456                            | cert-expiration-355456    | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:44 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-880805                                      | pause-880805              | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:44 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-662195                          | force-systemd-env-662195  | jenkins | v1.35.0 | 17 Mar 25 13:43 UTC | 17 Mar 25 13:44 UTC |
	| start   | -p force-systemd-flag-638911                         | force-systemd-flag-638911 | jenkins | v1.35.0 | 17 Mar 25 13:44 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-312638                         | kubernetes-upgrade-312638 | jenkins | v1.35.0 | 17 Mar 25 13:44 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:44:29
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:44:29.845999  668474 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:44:29.846124  668474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:44:29.846134  668474 out.go:358] Setting ErrFile to fd 2...
	I0317 13:44:29.846141  668474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:44:29.846405  668474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:44:29.846954  668474 out.go:352] Setting JSON to false
	I0317 13:44:29.848113  668474 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12414,"bootTime":1742206656,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:44:29.848216  668474 start.go:139] virtualization: kvm guest
	I0317 13:44:29.850375  668474 out.go:177] * [kubernetes-upgrade-312638] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:44:29.851693  668474 notify.go:220] Checking for updates...
	I0317 13:44:29.851699  668474 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:44:29.852921  668474 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:44:29.854069  668474 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:44:29.855302  668474 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:44:29.856589  668474 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:44:29.857772  668474 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:44:29.859606  668474 config.go:182] Loaded profile config "kubernetes-upgrade-312638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:44:29.860240  668474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:29.860329  668474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:29.876137  668474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0317 13:44:29.876654  668474 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:29.877169  668474 main.go:141] libmachine: Using API Version  1
	I0317 13:44:29.877192  668474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:29.877656  668474 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:29.877859  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:44:29.878243  668474 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:44:29.878683  668474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:29.878732  668474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:29.894672  668474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0317 13:44:29.895269  668474 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:29.895783  668474 main.go:141] libmachine: Using API Version  1
	I0317 13:44:29.895820  668474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:29.896218  668474 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:29.896377  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:44:29.937890  668474 out.go:177] * Using the kvm2 driver based on existing profile
	I0317 13:44:29.939292  668474 start.go:297] selected driver: kvm2
	I0317 13:44:29.939314  668474 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-312638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:44:29.939430  668474 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:44:29.940675  668474 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:44:29.940798  668474 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 13:44:29.971270  668474 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 13:44:29.971935  668474 cni.go:84] Creating CNI manager for ""
	I0317 13:44:29.972001  668474 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:44:29.972062  668474 start.go:340] cluster config:
	{Name:kubernetes-upgrade-312638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-312638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.55 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:44:29.972208  668474 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:44:29.974947  668474 out.go:177] * Starting "kubernetes-upgrade-312638" primary control-plane node in "kubernetes-upgrade-312638" cluster
	I0317 13:44:26.744778  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:26.745218  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find current IP address of domain force-systemd-flag-638911 in network mk-force-systemd-flag-638911
	I0317 13:44:26.745238  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | I0317 13:44:26.745209  668239 retry.go:31] will retry after 1.580348632s: waiting for domain to come up
	I0317 13:44:28.326768  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:28.327396  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find current IP address of domain force-systemd-flag-638911 in network mk-force-systemd-flag-638911
	I0317 13:44:28.327425  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | I0317 13:44:28.327370  668239 retry.go:31] will retry after 2.363365443s: waiting for domain to come up
	I0317 13:44:32.980491  667886 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc 0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234 bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9 43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523 8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2 52134ed65163f0d1f9dc051343b71d0609599f2288e077011f105c79e9ca5d6c 825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b f27f490b29263ee5a59064e2ba91a6135334641b3c1d0e269eafcd9e8d51b8d4 8d94b340edcaf8330770907cb316e84af06980194bcedeac7a6a85ef9edfe908 3e06072ef41edf254b60c17c41c093da497170312a62382cabecb4114b593c5c: (11.632534719s)
	W0317 13:44:32.980595  667886 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc 0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234 bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9 43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523 8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2 52134ed65163f0d1f9dc051343b71d0609599f2288e077011f105c79e9ca5d6c 825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b f27f490b29263ee5a59064e2ba91a6135334641b3c1d0e269eafcd9e8d51b8d4 8d94b340edcaf8330770907cb316e84af06980194bcedeac7a6a85ef9edfe908 3e06072ef41edf254b60c17c41c093da497170312a62382cabecb4114b593c5c: Process exited with status 1
	stdout:
	48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc
	0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234
	bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb
	e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9
	43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523
	8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2
	52134ed65163f0d1f9dc051343b71d0609599f2288e077011f105c79e9ca5d6c
	
	stderr:
	E0317 13:44:32.957310    3049 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b\": container with ID starting with 825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b not found: ID does not exist" containerID="825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b"
	time="2025-03-17T13:44:32Z" level=fatal msg="stopping the container \"825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b\": rpc error: code = NotFound desc = could not find container \"825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b\": container with ID starting with 825b74800cea36095287e67e1aea55a63cbd24250889498a833ff45f0353404b not found: ID does not exist"
	I0317 13:44:32.980675  667886 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0317 13:44:33.029861  667886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:44:33.039891  667886 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Mar 17 13:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Mar 17 13:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar 17 13:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Mar 17 13:43 /etc/kubernetes/scheduler.conf
	
	I0317 13:44:33.039987  667886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:44:33.048987  667886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:44:33.057773  667886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:44:33.066431  667886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:44:33.066503  667886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:44:33.076083  667886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:44:33.085292  667886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:44:33.085359  667886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:44:33.094458  667886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:44:33.103384  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:44:33.159411  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:44:34.534259  667886 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.374781885s)
	I0317 13:44:34.534299  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:44:34.740030  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:44:34.810074  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:44:29.976251  668474 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:44:29.976305  668474 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0317 13:44:29.976319  668474 cache.go:56] Caching tarball of preloaded images
	I0317 13:44:29.976429  668474 preload.go:172] Found /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0317 13:44:29.976445  668474 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0317 13:44:29.976565  668474 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kubernetes-upgrade-312638/config.json ...
	I0317 13:44:29.976826  668474 start.go:360] acquireMachinesLock for kubernetes-upgrade-312638: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:44:30.692746  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:30.693214  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find current IP address of domain force-systemd-flag-638911 in network mk-force-systemd-flag-638911
	I0317 13:44:30.693241  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | I0317 13:44:30.693181  668239 retry.go:31] will retry after 2.744285626s: waiting for domain to come up
	I0317 13:44:33.439543  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:33.440110  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find current IP address of domain force-systemd-flag-638911 in network mk-force-systemd-flag-638911
	I0317 13:44:33.440166  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | I0317 13:44:33.440108  668239 retry.go:31] will retry after 3.306472858s: waiting for domain to come up
	I0317 13:44:34.914417  667886 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:44:34.914531  667886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:44:35.414658  667886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:44:35.477684  667886 api_server.go:72] duration metric: took 563.267944ms to wait for apiserver process to appear ...
	I0317 13:44:35.477711  667886 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:44:35.477773  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:35.478254  667886 api_server.go:269] stopped: https://192.168.39.171:8443/healthz: Get "https://192.168.39.171:8443/healthz": dial tcp 192.168.39.171:8443: connect: connection refused
	I0317 13:44:35.978463  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:38.508964  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0317 13:44:38.508993  667886 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0317 13:44:38.509008  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:38.547367  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0317 13:44:38.547396  667886 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0317 13:44:38.977973  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:38.982339  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:44:38.982368  667886 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:44:39.478003  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:39.483545  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:44:39.483576  667886 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:44:39.978195  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:39.982238  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0317 13:44:39.988851  667886 api_server.go:141] control plane version: v1.32.2
	I0317 13:44:39.988879  667886 api_server.go:131] duration metric: took 4.511160464s to wait for apiserver health ...
	I0317 13:44:39.988891  667886 cni.go:84] Creating CNI manager for ""
	I0317 13:44:39.988901  667886 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:44:39.990601  667886 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:44:36.750826  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:36.751362  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find current IP address of domain force-systemd-flag-638911 in network mk-force-systemd-flag-638911
	I0317 13:44:36.751417  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | I0317 13:44:36.751362  668239 retry.go:31] will retry after 4.311400463s: waiting for domain to come up
	I0317 13:44:42.380333  668474 start.go:364] duration metric: took 12.403452968s to acquireMachinesLock for "kubernetes-upgrade-312638"
	I0317 13:44:42.380400  668474 start.go:96] Skipping create...Using existing machine configuration
	I0317 13:44:42.380409  668474 fix.go:54] fixHost starting: 
	I0317 13:44:42.380901  668474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:42.380960  668474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:42.400444  668474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0317 13:44:42.400877  668474 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:42.401298  668474 main.go:141] libmachine: Using API Version  1
	I0317 13:44:42.401321  668474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:42.401685  668474 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:42.401899  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	I0317 13:44:42.402042  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .GetState
	I0317 13:44:42.403739  668474 fix.go:112] recreateIfNeeded on kubernetes-upgrade-312638: state=Stopped err=<nil>
	I0317 13:44:42.403770  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .DriverName
	W0317 13:44:42.403925  668474 fix.go:138] unexpected machine state, will restart: <nil>
	I0317 13:44:42.405416  668474 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-312638" ...
	I0317 13:44:41.065610  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.066281  668126 main.go:141] libmachine: (force-systemd-flag-638911) found domain IP: 192.168.61.182
	I0317 13:44:41.066308  668126 main.go:141] libmachine: (force-systemd-flag-638911) reserving static IP address...
	I0317 13:44:41.066325  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has current primary IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.066785  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | unable to find host DHCP lease matching {name: "force-systemd-flag-638911", mac: "52:54:00:22:7e:2e", ip: "192.168.61.182"} in network mk-force-systemd-flag-638911
	I0317 13:44:41.146339  668126 main.go:141] libmachine: (force-systemd-flag-638911) reserved static IP address 192.168.61.182 for domain force-systemd-flag-638911
	I0317 13:44:41.146375  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | Getting to WaitForSSH function...
	I0317 13:44:41.146384  668126 main.go:141] libmachine: (force-systemd-flag-638911) waiting for SSH...
	I0317 13:44:41.149162  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.149689  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.149722  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.149869  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | Using SSH client type: external
	I0317 13:44:41.149911  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa (-rw-------)
	I0317 13:44:41.149951  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:44:41.149964  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | About to run SSH command:
	I0317 13:44:41.149999  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | exit 0
	I0317 13:44:41.271224  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | SSH cmd err, output: <nil>: 
	I0317 13:44:41.271491  668126 main.go:141] libmachine: (force-systemd-flag-638911) KVM machine creation complete
	I0317 13:44:41.271854  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetConfigRaw
	I0317 13:44:41.272413  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:41.272588  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:41.272743  668126 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 13:44:41.272758  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetState
	I0317 13:44:41.274169  668126 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 13:44:41.274183  668126 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 13:44:41.274188  668126 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 13:44:41.274194  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.276641  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.277049  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.277080  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.277234  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.277429  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.277596  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.277762  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.277930  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:41.278192  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:41.278203  668126 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 13:44:41.378705  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:44:41.378740  668126 main.go:141] libmachine: Detecting the provisioner...
	I0317 13:44:41.378753  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.381523  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.381922  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.381956  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.382179  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.382369  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.382549  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.382695  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.382924  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:41.383187  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:41.383202  668126 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 13:44:41.487953  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 13:44:41.488057  668126 main.go:141] libmachine: found compatible host: buildroot
	I0317 13:44:41.488071  668126 main.go:141] libmachine: Provisioning with buildroot...
	I0317 13:44:41.488087  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetMachineName
	I0317 13:44:41.488390  668126 buildroot.go:166] provisioning hostname "force-systemd-flag-638911"
	I0317 13:44:41.488424  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetMachineName
	I0317 13:44:41.488639  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.491049  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.491428  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.491458  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.491666  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.491815  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.491984  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.492163  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.492335  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:41.492588  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:41.492601  668126 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-638911 && echo "force-systemd-flag-638911" | sudo tee /etc/hostname
	I0317 13:44:41.604607  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-638911
	
	I0317 13:44:41.604640  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.607102  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.607434  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.607484  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.607721  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.607912  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.608084  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.608193  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.608368  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:41.608558  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:41.608573  668126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-638911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-638911/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-638911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:44:41.720265  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:44:41.720370  668126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:44:41.720423  668126 buildroot.go:174] setting up certificates
	I0317 13:44:41.720439  668126 provision.go:84] configureAuth start
	I0317 13:44:41.720461  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetMachineName
	I0317 13:44:41.720826  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetIP
	I0317 13:44:41.723694  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.724080  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.724120  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.724347  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.726962  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.727372  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.727403  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.727624  668126 provision.go:143] copyHostCerts
	I0317 13:44:41.727665  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:44:41.727700  668126 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:44:41.727712  668126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:44:41.727765  668126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:44:41.727840  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:44:41.727857  668126 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:44:41.727864  668126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:44:41.727883  668126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:44:41.727925  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:44:41.727941  668126 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:44:41.727947  668126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:44:41.727965  668126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:44:41.728011  668126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-638911 san=[127.0.0.1 192.168.61.182 force-systemd-flag-638911 localhost minikube]
	I0317 13:44:41.762446  668126 provision.go:177] copyRemoteCerts
	I0317 13:44:41.762499  668126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:44:41.762525  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.765386  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.765790  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.765819  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.765956  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.766150  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.766351  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.766519  668126 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa Username:docker}
	I0317 13:44:41.849046  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0317 13:44:41.849120  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:44:41.872224  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0317 13:44:41.872293  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0317 13:44:41.895163  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0317 13:44:41.895241  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 13:44:41.917926  668126 provision.go:87] duration metric: took 197.472972ms to configureAuth
	I0317 13:44:41.917949  668126 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:44:41.918131  668126 config.go:182] Loaded profile config "force-systemd-flag-638911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:44:41.918216  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:41.920667  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.921020  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:41.921052  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:41.921307  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:41.921507  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.921665  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:41.921772  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:41.921894  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:41.922104  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:41.922124  668126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:44:42.134089  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:44:42.134123  668126 main.go:141] libmachine: Checking connection to Docker...
	I0317 13:44:42.134132  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetURL
	I0317 13:44:42.135603  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | using libvirt version 6000000
	I0317 13:44:42.138088  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.138439  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.138461  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.138660  668126 main.go:141] libmachine: Docker is up and running!
	I0317 13:44:42.138678  668126 main.go:141] libmachine: Reticulating splines...
	I0317 13:44:42.138685  668126 client.go:171] duration metric: took 24.91957911s to LocalClient.Create
	I0317 13:44:42.138711  668126 start.go:167] duration metric: took 24.919645187s to libmachine.API.Create "force-systemd-flag-638911"
	I0317 13:44:42.138722  668126 start.go:293] postStartSetup for "force-systemd-flag-638911" (driver="kvm2")
	I0317 13:44:42.138731  668126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:44:42.138746  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:42.138991  668126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:44:42.139018  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:42.141026  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.141420  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.141445  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.141556  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:42.141744  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:42.141878  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:42.142005  668126 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa Username:docker}
	I0317 13:44:42.222345  668126 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:44:42.226512  668126 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:44:42.226537  668126 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:44:42.226614  668126 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:44:42.226729  668126 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:44:42.226745  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> /etc/ssl/certs/6291882.pem
	I0317 13:44:42.226870  668126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:44:42.236270  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:44:42.263300  668126 start.go:296] duration metric: took 124.564546ms for postStartSetup
	I0317 13:44:42.263356  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetConfigRaw
	I0317 13:44:42.263989  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetIP
	I0317 13:44:42.266671  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.267069  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.267102  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.267400  668126 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/config.json ...
	I0317 13:44:42.267647  668126 start.go:128] duration metric: took 25.223324534s to createHost
	I0317 13:44:42.267674  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:42.270166  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.270519  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.270554  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.270745  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:42.270979  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:42.271150  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:42.271339  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:42.271570  668126 main.go:141] libmachine: Using SSH client type: native
	I0317 13:44:42.271862  668126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I0317 13:44:42.271876  668126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:44:42.380114  668126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219082.348913716
	
	I0317 13:44:42.380150  668126 fix.go:216] guest clock: 1742219082.348913716
	I0317 13:44:42.380171  668126 fix.go:229] Guest: 2025-03-17 13:44:42.348913716 +0000 UTC Remote: 2025-03-17 13:44:42.267660829 +0000 UTC m=+41.869578656 (delta=81.252887ms)
	I0317 13:44:42.380203  668126 fix.go:200] guest clock delta is within tolerance: 81.252887ms
	I0317 13:44:42.380212  668126 start.go:83] releasing machines lock for "force-systemd-flag-638911", held for 25.336097901s
	I0317 13:44:42.380264  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:42.380567  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetIP
	I0317 13:44:42.383699  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.384167  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.384199  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.384397  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:42.385051  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:42.385250  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:42.385330  668126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:44:42.385378  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:42.385499  668126 ssh_runner.go:195] Run: cat /version.json
	I0317 13:44:42.385525  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:42.388428  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.388498  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.388870  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.388894  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.388918  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:42.388931  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:42.389214  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:42.389318  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:42.389402  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:42.389470  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:42.389583  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:42.389655  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:42.389736  668126 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa Username:docker}
	I0317 13:44:42.389815  668126 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa Username:docker}
	I0317 13:44:42.493984  668126 ssh_runner.go:195] Run: systemctl --version
	I0317 13:44:42.500108  668126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:44:42.657860  668126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:44:42.664405  668126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:44:42.664469  668126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:44:42.679993  668126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:44:42.680016  668126 start.go:495] detecting cgroup driver to use...
	I0317 13:44:42.680031  668126 start.go:499] using "systemd" cgroup driver as enforced via flags
	I0317 13:44:42.680075  668126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:44:42.699983  668126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:44:42.714052  668126 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:44:42.714111  668126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:44:42.727777  668126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:44:42.741547  668126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:44:42.859998  668126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:44:43.002917  668126 docker.go:233] disabling docker service ...
	I0317 13:44:43.002996  668126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:44:43.017621  668126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:44:43.032372  668126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:44:43.182125  668126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:44:43.318786  668126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:44:43.333035  668126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:44:43.351083  668126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0317 13:44:43.351160  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.361576  668126 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0317 13:44:43.361644  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.371685  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.381559  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.396859  668126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:44:43.409659  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.423207  668126 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.442208  668126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:44:43.453824  668126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:44:43.463122  668126 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:44:43.463188  668126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:44:43.475476  668126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:44:43.490099  668126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:44:43.620896  668126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:44:43.723544  668126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:44:43.723626  668126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:44:43.728420  668126 start.go:563] Will wait 60s for crictl version
	I0317 13:44:43.728465  668126 ssh_runner.go:195] Run: which crictl
	I0317 13:44:43.731806  668126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:44:43.767301  668126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:44:43.767405  668126 ssh_runner.go:195] Run: crio --version
	I0317 13:44:43.794226  668126 ssh_runner.go:195] Run: crio --version
	I0317 13:44:43.821952  668126 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0317 13:44:39.991881  667886 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:44:40.014401  667886 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0317 13:44:40.034790  667886 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:44:40.039178  667886 system_pods.go:59] 6 kube-system pods found
	I0317 13:44:40.039213  667886 system_pods.go:61] "coredns-668d6bf9bc-nttjk" [b185d851-a2d4-4a9f-a30b-26d34b39beeb] Running
	I0317 13:44:40.039226  667886 system_pods.go:61] "etcd-pause-880805" [4fc616fb-91ce-443f-a9a5-1ab37a052d19] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0317 13:44:40.039235  667886 system_pods.go:61] "kube-apiserver-pause-880805" [d440f7c0-a631-4145-80f8-f4e50ed71084] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0317 13:44:40.039247  667886 system_pods.go:61] "kube-controller-manager-pause-880805" [32c3bc6f-471f-415a-84e8-dd540d5c6023] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0317 13:44:40.039254  667886 system_pods.go:61] "kube-proxy-j6xzf" [735bd65e-41e7-48bc-b9c2-c6fdda988310] Running
	I0317 13:44:40.039265  667886 system_pods.go:61] "kube-scheduler-pause-880805" [a8237c89-96b9-478c-aed0-113dc4e3b1dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0317 13:44:40.039278  667886 system_pods.go:74] duration metric: took 4.46195ms to wait for pod list to return data ...
	I0317 13:44:40.039299  667886 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:44:40.041945  667886 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:44:40.041969  667886 node_conditions.go:123] node cpu capacity is 2
	I0317 13:44:40.041980  667886 node_conditions.go:105] duration metric: took 2.673013ms to run NodePressure ...
	I0317 13:44:40.041996  667886 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:44:40.312302  667886 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0317 13:44:40.314905  667886 kubeadm.go:739] kubelet initialised
	I0317 13:44:40.314928  667886 kubeadm.go:740] duration metric: took 2.598978ms waiting for restarted kubelet to initialise ...
	I0317 13:44:40.314939  667886 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:44:40.317212  667886 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:40.321121  667886 pod_ready.go:93] pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:40.321141  667886 pod_ready.go:82] duration metric: took 3.900997ms for pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:40.321149  667886 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:42.327995  667886 pod_ready.go:103] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"False"
	I0317 13:44:44.328639  667886 pod_ready.go:103] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"False"
	I0317 13:44:42.406502  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Calling .Start
	I0317 13:44:42.406680  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) starting domain...
	I0317 13:44:42.406714  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) ensuring networks are active...
	I0317 13:44:42.407490  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Ensuring network default is active
	I0317 13:44:42.407936  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) Ensuring network mk-kubernetes-upgrade-312638 is active
	I0317 13:44:42.408345  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) getting domain XML...
	I0317 13:44:42.409190  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) creating domain...
	I0317 13:44:43.726079  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) waiting for IP...
	I0317 13:44:43.727131  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:43.727721  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:43.727807  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:43.727700  668579 retry.go:31] will retry after 254.488886ms: waiting for domain to come up
	I0317 13:44:43.984328  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:43.984791  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:43.984850  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:43.984768  668579 retry.go:31] will retry after 259.583433ms: waiting for domain to come up
	I0317 13:44:44.246322  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:44.247055  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:44.247089  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:44.247017  668579 retry.go:31] will retry after 385.8999ms: waiting for domain to come up
	I0317 13:44:44.634847  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:44.635476  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:44.635508  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:44.635445  668579 retry.go:31] will retry after 413.669683ms: waiting for domain to come up
	I0317 13:44:43.823417  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetIP
	I0317 13:44:43.826970  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:43.827552  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:43.827581  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:43.827873  668126 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0317 13:44:43.831824  668126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:44:43.844263  668126 kubeadm.go:883] updating cluster {Name:force-systemd-flag-638911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:force-systemd-flag-638911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:44:43.844358  668126 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:44:43.844415  668126 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:44:43.874831  668126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0317 13:44:43.874901  668126 ssh_runner.go:195] Run: which lz4
	I0317 13:44:43.878477  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0317 13:44:43.878601  668126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:44:43.882516  668126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:44:43.882549  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0317 13:44:45.138506  668126 crio.go:462] duration metric: took 1.259947616s to copy over tarball
	I0317 13:44:45.138603  668126 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:44:46.827476  667886 pod_ready.go:103] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"False"
	I0317 13:44:49.418342  667886 pod_ready.go:103] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"False"
	I0317 13:44:45.051026  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:45.051634  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:45.051665  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:45.051597  668579 retry.go:31] will retry after 723.318576ms: waiting for domain to come up
	I0317 13:44:45.776707  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:45.777269  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:45.777303  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:45.777198  668579 retry.go:31] will retry after 724.270735ms: waiting for domain to come up
	I0317 13:44:46.503036  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:46.503704  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:46.503726  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:46.503671  668579 retry.go:31] will retry after 992.581309ms: waiting for domain to come up
	I0317 13:44:47.498301  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:47.498798  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:47.498826  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:47.498768  668579 retry.go:31] will retry after 1.30814635s: waiting for domain to come up
	I0317 13:44:48.808842  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:48.809343  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:48.809402  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:48.809312  668579 retry.go:31] will retry after 1.844453207s: waiting for domain to come up
	I0317 13:44:47.418336  668126 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.279689899s)
	I0317 13:44:47.418390  668126 crio.go:469] duration metric: took 2.279841917s to extract the tarball
	I0317 13:44:47.418402  668126 ssh_runner.go:146] rm: /preloaded.tar.lz4
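The preload step above copies the ~399 MB lz4-compressed image tarball into the guest and unpacks it with GNU tar's -I decompressor flag before deleting it. The following is only a local Go sketch of that extraction step (paths are illustrative; minikube actually runs the command over SSH via ssh_runner):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Illustrative paths; the real preload tarball lives under ~/.minikube/cache
	// and is extracted inside the guest VM.
	tarball := "/preloaded.tar.lz4"
	dest := "/var"

	// GNU tar: -I selects the decompressor, -C the extraction root,
	// --xattrs preserves extended attributes such as file capabilities.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	fmt.Println("preload extracted into", dest)
}
```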
	I0317 13:44:47.456429  668126 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:44:47.502560  668126 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:44:47.502587  668126 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:44:47.502597  668126 kubeadm.go:934] updating node { 192.168.61.182 8443 v1.32.2 crio true true} ...
	I0317 13:44:47.502719  668126 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-638911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:force-systemd-flag-638911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
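The kubelet drop-in rendered above is produced from a template with the node name, node IP and Kubernetes version substituted in. A hypothetical text/template sketch that reproduces the same output (the template text and field names are assumptions for illustration, not minikube's actual template):

```go
package main

import (
	"os"
	"text/template"
)

// Hypothetical template mirroring the kubelet drop-in shown in the log;
// minikube's real template lives in its source tree and carries more fields.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.32.2",
		"NodeName":          "force-systemd-flag-638911",
		"NodeIP":            "192.168.61.182",
	})
}
```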
	I0317 13:44:47.502800  668126 ssh_runner.go:195] Run: crio config
	I0317 13:44:47.553965  668126 cni.go:84] Creating CNI manager for ""
	I0317 13:44:47.553988  668126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:44:47.554001  668126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:44:47.554029  668126 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.182 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-638911 NodeName:force-systemd-flag-638911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.182 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:44:47.554191  668126 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-638911"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.182"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.182"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
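The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube uploads as /var/tmp/minikube/kubeadm.yaml. As a quick sanity check, independent of minikube, one might decode each document with gopkg.in/yaml.v3, whose decoder iterates documents until io.EOF:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// e.g. a local copy of the file minikube uploads as /var/tmp/minikube/kubeadm.yaml
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents
			}
			log.Fatalf("invalid YAML document: %v", err)
		}
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}
```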
	I0317 13:44:47.554272  668126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:44:47.563769  668126 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:44:47.563848  668126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:44:47.572766  668126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0317 13:44:47.588068  668126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:44:47.605899  668126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2304 bytes)
	I0317 13:44:47.623593  668126 ssh_runner.go:195] Run: grep 192.168.61.182	control-plane.minikube.internal$ /etc/hosts
	I0317 13:44:47.627413  668126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:44:47.639758  668126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:44:47.789329  668126 ssh_runner.go:195] Run: sudo systemctl start kubelet
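The /etc/hosts rewrite logged at 13:44:47.627413 is an idempotent upsert: drop any line ending in a tab plus control-plane.minikube.internal, then append the current mapping. A standalone Go sketch of the same idea (hypothetical helper, operating on a scratch file rather than the real /etc/hosts):

```go
package main

import (
	"log"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line
// maps the given hostname, mirroring the grep -v / echo one-liner in the log.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any previous entry for this hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Work on a scratch copy so the example is safe to run; the real
	// target in the log is /etc/hosts inside the guest, written as root.
	tmp := "hosts.example"
	seed := "127.0.0.1\tlocalhost\n192.168.61.1\tcontrol-plane.minikube.internal\n"
	if err := os.WriteFile(tmp, []byte(seed), 0644); err != nil {
		log.Fatal(err)
	}
	if err := upsertHost(tmp, "192.168.61.182", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
	out, _ := os.ReadFile(tmp)
	os.Stdout.Write(out)
}
```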
	I0317 13:44:47.805998  668126 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911 for IP: 192.168.61.182
	I0317 13:44:47.806030  668126 certs.go:194] generating shared ca certs ...
	I0317 13:44:47.806053  668126 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:47.806291  668126 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:44:47.806366  668126 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:44:47.806383  668126 certs.go:256] generating profile certs ...
	I0317 13:44:47.806464  668126 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.key
	I0317 13:44:47.806502  668126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.crt with IP's: []
	I0317 13:44:47.999886  668126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.crt ...
	I0317 13:44:47.999920  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.crt: {Name:mk611b0bbba778e4de9b41db564bb4b16aaed850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:48.000097  668126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.key ...
	I0317 13:44:48.000111  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.key: {Name:mk0b06d3168bd784bc71540e69b8e94432e272e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:48.000192  668126 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key.d492adc8
	I0317 13:44:48.000207  668126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt.d492adc8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.182]
	I0317 13:44:48.182682  668126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt.d492adc8 ...
	I0317 13:44:48.182727  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt.d492adc8: {Name:mk55bd44ee31dabbe68f2fd171c30c67905f1132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:48.182937  668126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key.d492adc8 ...
	I0317 13:44:48.182958  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key.d492adc8: {Name:mkfc54bc38333888b307dbf401857e83a3257d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:48.183066  668126 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt.d492adc8 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt
	I0317 13:44:48.183162  668126 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key.d492adc8 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key
	I0317 13:44:48.183241  668126 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.key
	I0317 13:44:48.183264  668126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.crt with IP's: []
	I0317 13:44:48.555133  668126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.crt ...
	I0317 13:44:48.555169  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.crt: {Name:mkd271a39c3d669ae2c876cd1c996b14968810ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:48.555383  668126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.key ...
	I0317 13:44:48.555409  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.key: {Name:mkdd41f47afee4b3c05d7b36f24cfff859415a15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:48.555526  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0317 13:44:48.555576  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0317 13:44:48.555592  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0317 13:44:48.555605  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0317 13:44:48.555617  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0317 13:44:48.555630  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0317 13:44:48.555641  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0317 13:44:48.555654  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0317 13:44:48.555706  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:44:48.555741  668126 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:44:48.555751  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:44:48.555771  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:44:48.555794  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:44:48.555817  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:44:48.555854  668126 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:44:48.555881  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> /usr/share/ca-certificates/6291882.pem
	I0317 13:44:48.555896  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:44:48.555908  668126 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem -> /usr/share/ca-certificates/629188.pem
	I0317 13:44:48.556419  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:44:48.588045  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:44:48.612710  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:44:48.654612  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:44:48.678360  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0317 13:44:48.701390  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:44:48.727299  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:44:48.751701  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:44:48.774711  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:44:48.797699  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:44:48.820688  668126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:44:48.844527  668126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:44:48.861291  668126 ssh_runner.go:195] Run: openssl version
	I0317 13:44:48.866773  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:44:48.877330  668126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:44:48.882084  668126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:44:48.882152  668126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:44:48.887901  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:44:48.899086  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:44:48.909739  668126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:44:48.915543  668126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:44:48.915605  668126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:44:48.924887  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:44:48.938140  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:44:48.948748  668126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:44:48.953337  668126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:44:48.953393  668126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:44:48.959115  668126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
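The ls/openssl/ln sequences above exist because OpenSSL locates CA certificates by subject hash: each PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <hash>.0 (e.g. b5213941.0 for minikubeCA.pem). A hedged Go sketch of that hash-and-link step, shelling out to openssl (the arguments in main are illustrative, and the link is created in a temp directory rather than /etc/ssl/certs):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors `openssl x509 -hash -noout -in cert` followed
// by `ln -fs cert <certsDir>/<hash>.0`.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -f: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created", link)
}
```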
	I0317 13:44:48.970225  668126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:44:48.974319  668126 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:44:48.974380  668126 kubeadm.go:392] StartCluster: {Name:force-systemd-flag-638911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:force-systemd-flag-638911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:44:48.974481  668126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:44:48.974542  668126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:44:49.012180  668126 cri.go:89] found id: ""
	I0317 13:44:49.012279  668126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:44:49.023049  668126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:44:49.034835  668126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:44:49.046524  668126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:44:49.046555  668126 kubeadm.go:157] found existing configuration files:
	
	I0317 13:44:49.046604  668126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:44:49.056595  668126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:44:49.056659  668126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:44:49.066849  668126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:44:49.075875  668126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:44:49.075942  668126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:44:49.085012  668126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:44:49.094603  668126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:44:49.094682  668126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:44:49.104087  668126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:44:49.113518  668126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:44:49.113583  668126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:44:49.125534  668126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:44:49.338474  668126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:44:51.828302  667886 pod_ready.go:103] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"False"
	I0317 13:44:52.828238  667886 pod_ready.go:93] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:52.828262  667886 pod_ready.go:82] duration metric: took 12.507107151s for pod "etcd-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.828278  667886 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.833008  667886 pod_ready.go:93] pod "kube-apiserver-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:52.833036  667886 pod_ready.go:82] duration metric: took 4.749094ms for pod "kube-apiserver-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.833052  667886 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.837249  667886 pod_ready.go:93] pod "kube-controller-manager-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:52.837274  667886 pod_ready.go:82] duration metric: took 4.21448ms for pod "kube-controller-manager-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.837283  667886 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-j6xzf" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.841206  667886 pod_ready.go:93] pod "kube-proxy-j6xzf" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:52.841229  667886 pod_ready.go:82] duration metric: took 3.938504ms for pod "kube-proxy-j6xzf" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:52.841241  667886 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:50.655076  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:50.655609  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:50.655639  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:50.655581  668579 retry.go:31] will retry after 1.885660977s: waiting for domain to come up
	I0317 13:44:52.543156  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:52.543812  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:52.543867  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:52.543756  668579 retry.go:31] will retry after 2.68611123s: waiting for domain to come up
	I0317 13:44:54.847821  667886 pod_ready.go:93] pod "kube-scheduler-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:54.847920  667886 pod_ready.go:82] duration metric: took 2.006666341s for pod "kube-scheduler-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:54.847948  667886 pod_ready.go:39] duration metric: took 14.53299614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:44:54.847998  667886 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:44:54.863335  667886 ops.go:34] apiserver oom_adj: -16
	I0317 13:44:54.863426  667886 kubeadm.go:597] duration metric: took 33.615812738s to restartPrimaryControlPlane
	I0317 13:44:54.863453  667886 kubeadm.go:394] duration metric: took 34.056168573s to StartCluster
	I0317 13:44:54.863501  667886 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:54.863601  667886 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:44:54.864842  667886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:54.865137  667886 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:44:54.865408  667886 config.go:182] Loaded profile config "pause-880805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:44:54.865433  667886 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:44:54.866805  667886 out.go:177] * Enabled addons: 
	I0317 13:44:54.866812  667886 out.go:177] * Verifying Kubernetes components...
	I0317 13:44:54.868082  667886 addons.go:514] duration metric: took 2.655448ms for enable addons: enabled=[]
	I0317 13:44:54.868155  667886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:44:55.046597  667886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:44:55.062303  667886 node_ready.go:35] waiting up to 6m0s for node "pause-880805" to be "Ready" ...
	I0317 13:44:55.065238  667886 node_ready.go:49] node "pause-880805" has status "Ready":"True"
	I0317 13:44:55.065263  667886 node_ready.go:38] duration metric: took 2.908667ms for node "pause-880805" to be "Ready" ...
	I0317 13:44:55.065274  667886 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:44:55.067875  667886 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:55.226383  667886 pod_ready.go:93] pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:55.226412  667886 pod_ready.go:82] duration metric: took 158.507648ms for pod "coredns-668d6bf9bc-nttjk" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:55.226423  667886 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:55.625495  667886 pod_ready.go:93] pod "etcd-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:55.625527  667886 pod_ready.go:82] duration metric: took 399.096546ms for pod "etcd-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:55.625540  667886 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.026020  667886 pod_ready.go:93] pod "kube-apiserver-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:56.026044  667886 pod_ready.go:82] duration metric: took 400.496433ms for pod "kube-apiserver-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.026055  667886 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.425541  667886 pod_ready.go:93] pod "kube-controller-manager-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:56.425566  667886 pod_ready.go:82] duration metric: took 399.504164ms for pod "kube-controller-manager-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.425575  667886 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j6xzf" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.825728  667886 pod_ready.go:93] pod "kube-proxy-j6xzf" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:56.825764  667886 pod_ready.go:82] duration metric: took 400.180922ms for pod "kube-proxy-j6xzf" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:56.825779  667886 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:57.225623  667886 pod_ready.go:93] pod "kube-scheduler-pause-880805" in "kube-system" namespace has status "Ready":"True"
	I0317 13:44:57.225655  667886 pod_ready.go:82] duration metric: took 399.866844ms for pod "kube-scheduler-pause-880805" in "kube-system" namespace to be "Ready" ...
	I0317 13:44:57.225666  667886 pod_ready.go:39] duration metric: took 2.160376658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
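The pod_ready waits above poll each system-critical pod until its Ready condition reports True. A condensed client-go sketch of that check (not minikube's pod_ready helper; the pod name and kubeconfig path are assumptions for illustration):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether a Pod's Ready condition is True,
// which is what the pod_ready waits in the log are polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll a single control-plane pod until Ready (pod name is illustrative).
	name := "etcd-pause-880805"
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		if isPodReady(pod) {
			fmt.Println(name, "is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```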
	I0317 13:44:57.225686  667886 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:44:57.225752  667886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:44:57.240457  667886 api_server.go:72] duration metric: took 2.375250516s to wait for apiserver process to appear ...
	I0317 13:44:57.240489  667886 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:44:57.240508  667886 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0317 13:44:57.244616  667886 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0317 13:44:57.245643  667886 api_server.go:141] control plane version: v1.32.2
	I0317 13:44:57.245670  667886 api_server.go:131] duration metric: took 5.173045ms to wait for apiserver health ...
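The healthz probe at 13:44:57.240508 is a plain HTTPS GET against the apiserver endpoint, succeeding once it returns 200 with body "ok". A minimal sketch of such a probe follows; it skips TLS verification for brevity, whereas minikube verifies against the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Quick-probe shortcut only; minikube trusts the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.171:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```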
	I0317 13:44:57.245681  667886 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:44:57.426533  667886 system_pods.go:59] 6 kube-system pods found
	I0317 13:44:57.426574  667886 system_pods.go:61] "coredns-668d6bf9bc-nttjk" [b185d851-a2d4-4a9f-a30b-26d34b39beeb] Running
	I0317 13:44:57.426585  667886 system_pods.go:61] "etcd-pause-880805" [4fc616fb-91ce-443f-a9a5-1ab37a052d19] Running
	I0317 13:44:57.426592  667886 system_pods.go:61] "kube-apiserver-pause-880805" [d440f7c0-a631-4145-80f8-f4e50ed71084] Running
	I0317 13:44:57.426597  667886 system_pods.go:61] "kube-controller-manager-pause-880805" [32c3bc6f-471f-415a-84e8-dd540d5c6023] Running
	I0317 13:44:57.426603  667886 system_pods.go:61] "kube-proxy-j6xzf" [735bd65e-41e7-48bc-b9c2-c6fdda988310] Running
	I0317 13:44:57.426611  667886 system_pods.go:61] "kube-scheduler-pause-880805" [a8237c89-96b9-478c-aed0-113dc4e3b1dc] Running
	I0317 13:44:57.426619  667886 system_pods.go:74] duration metric: took 180.929907ms to wait for pod list to return data ...
	I0317 13:44:57.426629  667886 default_sa.go:34] waiting for default service account to be created ...
	I0317 13:44:57.626004  667886 default_sa.go:45] found service account: "default"
	I0317 13:44:57.626039  667886 default_sa.go:55] duration metric: took 199.401959ms for default service account to be created ...
	I0317 13:44:57.626052  667886 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 13:44:57.826047  667886 system_pods.go:86] 6 kube-system pods found
	I0317 13:44:57.826090  667886 system_pods.go:89] "coredns-668d6bf9bc-nttjk" [b185d851-a2d4-4a9f-a30b-26d34b39beeb] Running
	I0317 13:44:57.826097  667886 system_pods.go:89] "etcd-pause-880805" [4fc616fb-91ce-443f-a9a5-1ab37a052d19] Running
	I0317 13:44:57.826105  667886 system_pods.go:89] "kube-apiserver-pause-880805" [d440f7c0-a631-4145-80f8-f4e50ed71084] Running
	I0317 13:44:57.826110  667886 system_pods.go:89] "kube-controller-manager-pause-880805" [32c3bc6f-471f-415a-84e8-dd540d5c6023] Running
	I0317 13:44:57.826114  667886 system_pods.go:89] "kube-proxy-j6xzf" [735bd65e-41e7-48bc-b9c2-c6fdda988310] Running
	I0317 13:44:57.826119  667886 system_pods.go:89] "kube-scheduler-pause-880805" [a8237c89-96b9-478c-aed0-113dc4e3b1dc] Running
	I0317 13:44:57.826129  667886 system_pods.go:126] duration metric: took 200.069419ms to wait for k8s-apps to be running ...
	I0317 13:44:57.826138  667886 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 13:44:57.826201  667886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:44:57.843116  667886 system_svc.go:56] duration metric: took 16.963351ms WaitForService to wait for kubelet
	I0317 13:44:57.843159  667886 kubeadm.go:582] duration metric: took 2.977959339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:44:57.843184  667886 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:44:58.025516  667886 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:44:58.025547  667886 node_conditions.go:123] node cpu capacity is 2
	I0317 13:44:58.025559  667886 node_conditions.go:105] duration metric: took 182.370464ms to run NodePressure ...
	I0317 13:44:58.025573  667886 start.go:241] waiting for startup goroutines ...
	I0317 13:44:58.025579  667886 start.go:246] waiting for cluster config update ...
	I0317 13:44:58.025588  667886 start.go:255] writing updated cluster config ...
	I0317 13:44:58.025922  667886 ssh_runner.go:195] Run: rm -f paused
	I0317 13:44:58.078147  667886 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 13:44:58.080186  667886 out.go:177] * Done! kubectl is now configured to use "pause-880805" cluster and "default" namespace by default
	I0317 13:44:59.239043  668126 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 13:44:59.239113  668126 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:44:59.239197  668126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:44:59.239359  668126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:44:59.239510  668126 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 13:44:59.239617  668126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:44:59.241178  668126 out.go:235]   - Generating certificates and keys ...
	I0317 13:44:59.241290  668126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:44:59.241380  668126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:44:59.241477  668126 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:44:59.241572  668126 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:44:59.241667  668126 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:44:59.241741  668126 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:44:59.241830  668126 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:44:59.242003  668126 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-638911 localhost] and IPs [192.168.61.182 127.0.0.1 ::1]
	I0317 13:44:59.242075  668126 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:44:59.242252  668126 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-638911 localhost] and IPs [192.168.61.182 127.0.0.1 ::1]
	I0317 13:44:59.242341  668126 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:44:59.242395  668126 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:44:59.242431  668126 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:44:59.242475  668126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:44:59.242532  668126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:44:59.242610  668126 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 13:44:59.242666  668126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:44:59.242746  668126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:44:59.242818  668126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:44:59.242923  668126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:44:59.243015  668126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:44:59.244855  668126 out.go:235]   - Booting up control plane ...
	I0317 13:44:59.244978  668126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:44:59.245039  668126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:44:59.245124  668126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:44:59.245251  668126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:44:59.245355  668126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:44:59.245416  668126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:44:59.245607  668126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 13:44:59.245762  668126 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 13:44:59.245851  668126 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001841505s
	I0317 13:44:59.245959  668126 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 13:44:59.246051  668126 kubeadm.go:310] [api-check] The API server is healthy after 5.001478686s
	I0317 13:44:59.246184  668126 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 13:44:59.246379  668126 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 13:44:59.246471  668126 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 13:44:59.246731  668126 kubeadm.go:310] [mark-control-plane] Marking the node force-systemd-flag-638911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 13:44:59.246842  668126 kubeadm.go:310] [bootstrap-token] Using token: jx65xq.cl8n4b1ijnuwd3gm
	I0317 13:44:59.248223  668126 out.go:235]   - Configuring RBAC rules ...
	I0317 13:44:59.248358  668126 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 13:44:59.248467  668126 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 13:44:59.248656  668126 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 13:44:59.248819  668126 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 13:44:59.248962  668126 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 13:44:59.249073  668126 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 13:44:59.249174  668126 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 13:44:59.249214  668126 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 13:44:59.249254  668126 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 13:44:59.249260  668126 kubeadm.go:310] 
	I0317 13:44:59.249314  668126 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 13:44:59.249321  668126 kubeadm.go:310] 
	I0317 13:44:59.249420  668126 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 13:44:59.249430  668126 kubeadm.go:310] 
	I0317 13:44:59.249465  668126 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 13:44:59.249554  668126 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 13:44:59.249628  668126 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 13:44:59.249637  668126 kubeadm.go:310] 
	I0317 13:44:59.249716  668126 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 13:44:59.249725  668126 kubeadm.go:310] 
	I0317 13:44:59.249764  668126 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 13:44:59.249770  668126 kubeadm.go:310] 
	I0317 13:44:59.249831  668126 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 13:44:59.249917  668126 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 13:44:59.249981  668126 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 13:44:59.249989  668126 kubeadm.go:310] 
	I0317 13:44:59.250070  668126 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 13:44:59.250141  668126 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 13:44:59.250147  668126 kubeadm.go:310] 
	I0317 13:44:59.250226  668126 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jx65xq.cl8n4b1ijnuwd3gm \
	I0317 13:44:59.250319  668126 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 \
	I0317 13:44:59.250341  668126 kubeadm.go:310] 	--control-plane 
	I0317 13:44:59.250345  668126 kubeadm.go:310] 
	I0317 13:44:59.250414  668126 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 13:44:59.250420  668126 kubeadm.go:310] 
	I0317 13:44:59.250506  668126 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jx65xq.cl8n4b1ijnuwd3gm \
	I0317 13:44:59.250627  668126 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 
	I0317 13:44:59.250649  668126 cni.go:84] Creating CNI manager for ""
	I0317 13:44:59.250657  668126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:44:59.252189  668126 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:44:59.253579  668126 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:44:59.266562  668126 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
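The exact 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. A typical bridge CNI configuration of this kind looks roughly like the JSON embedded in the sketch below, which simply checks that it parses; the contents are illustrative and not necessarily minikube's exact file:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Illustrative bridge CNI conflist in the style of /etc/cni/net.d/1-k8s.conflist.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	var cfg struct {
		CNIVersion string                   `json:"cniVersion"`
		Name       string                   `json:"name"`
		Plugins    []map[string]interface{} `json:"plugins"`
	}
	if err := json.Unmarshal([]byte(conflist), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %d plugin(s), CNI %s\n", cfg.Name, len(cfg.Plugins), cfg.CNIVersion)
}
```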
	I0317 13:44:59.283772  668126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:44:59.283855  668126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:44:59.283871  668126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-flag-638911 minikube.k8s.io/updated_at=2025_03_17T13_44_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=force-systemd-flag-638911 minikube.k8s.io/primary=true
	I0317 13:44:59.308815  668126 ops.go:34] apiserver oom_adj: -16
	I0317 13:44:59.536071  668126 kubeadm.go:1113] duration metric: took 252.279457ms to wait for elevateKubeSystemPrivileges
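elevateKubeSystemPrivileges shells out to kubectl (13:44:59.283855) to bind cluster-admin to the kube-system default service account. For comparison only, the equivalent call with client-go would look roughly like this (the kubeconfig path is an assumption):

```go
package main

import (
	"context"
	"log"
	"os"
	"path/filepath"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same effect as:
	//   kubectl create clusterrolebinding minikube-rbac \
	//     --clusterrole=cluster-admin --serviceaccount=kube-system:default
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	if _, err := client.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("created clusterrolebinding minikube-rbac")
}
```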
	I0317 13:44:59.536113  668126 kubeadm.go:394] duration metric: took 10.561738839s to StartCluster
	I0317 13:44:59.536140  668126 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:59.536231  668126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:44:59.537847  668126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:44:59.538080  668126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 13:44:59.538107  668126 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:44:59.538169  668126 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:44:59.538271  668126 addons.go:69] Setting storage-provisioner=true in profile "force-systemd-flag-638911"
	I0317 13:44:59.538301  668126 addons.go:238] Setting addon storage-provisioner=true in "force-systemd-flag-638911"
	I0317 13:44:59.538339  668126 host.go:66] Checking if "force-systemd-flag-638911" exists ...
	I0317 13:44:59.538362  668126 config.go:182] Loaded profile config "force-systemd-flag-638911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:44:59.538399  668126 addons.go:69] Setting default-storageclass=true in profile "force-systemd-flag-638911"
	I0317 13:44:59.538427  668126 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-flag-638911"
	I0317 13:44:59.538802  668126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:59.538828  668126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:59.538845  668126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:59.538856  668126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:59.539832  668126 out.go:177] * Verifying Kubernetes components...
	I0317 13:44:59.541334  668126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:44:59.555402  668126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I0317 13:44:59.555918  668126 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:59.556442  668126 main.go:141] libmachine: Using API Version  1
	I0317 13:44:59.556470  668126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:59.556924  668126 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:59.557115  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetState
	I0317 13:44:59.560267  668126 kapi.go:59] client config for force-systemd-flag-638911: &rest.Config{Host:"https://192.168.61.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.crt", KeyFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.key", CAFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
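
The rest.Config dump above is a standard client-go configuration pointing at the profile's API server. As a rough illustrative sketch (not minikube's own kapi.go code), an equivalent client could be built directly from the host and certificate paths shown in the log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Host and TLS material match the client config logged above.
		cfg := &rest.Config{
			Host: "https://192.168.61.182:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.key",
				CAFile:   "/home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt",
			},
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Quick sanity check against the same endpoint: list kube-system pods.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("kube-system pods:", len(pods.Items))
	}
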
	I0317 13:44:59.560592  668126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0317 13:44:59.560832  668126 cert_rotation.go:140] Starting client certificate rotation controller
	I0317 13:44:59.560836  668126 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0317 13:44:59.560901  668126 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0317 13:44:59.560918  668126 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0317 13:44:59.560925  668126 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0317 13:44:59.561004  668126 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:59.561355  668126 addons.go:238] Setting addon default-storageclass=true in "force-systemd-flag-638911"
	I0317 13:44:59.561400  668126 host.go:66] Checking if "force-systemd-flag-638911" exists ...
	I0317 13:44:59.561447  668126 main.go:141] libmachine: Using API Version  1
	I0317 13:44:59.561466  668126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:59.561793  668126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:59.561802  668126 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:59.561826  668126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:59.562231  668126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:59.562257  668126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:59.577298  668126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0317 13:44:59.577903  668126 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:59.578467  668126 main.go:141] libmachine: Using API Version  1
	I0317 13:44:59.578501  668126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:59.578585  668126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41967
	I0317 13:44:59.578898  668126 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:59.579169  668126 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:59.579555  668126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:44:59.579588  668126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:44:59.579680  668126 main.go:141] libmachine: Using API Version  1
	I0317 13:44:59.579705  668126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:59.580081  668126 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:59.580458  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetState
	I0317 13:44:59.582470  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:59.584840  668126 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:44:55.232518  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:55.233059  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:55.233116  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:55.233018  668579 retry.go:31] will retry after 3.20625612s: waiting for domain to come up
	I0317 13:44:58.440740  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | domain kubernetes-upgrade-312638 has defined MAC address 52:54:00:2a:ac:41 in network mk-kubernetes-upgrade-312638
	I0317 13:44:58.441257  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | unable to find current IP address of domain kubernetes-upgrade-312638 in network mk-kubernetes-upgrade-312638
	I0317 13:44:58.441311  668474 main.go:141] libmachine: (kubernetes-upgrade-312638) DBG | I0317 13:44:58.441228  668579 retry.go:31] will retry after 2.938986663s: waiting for domain to come up
	I0317 13:44:59.586689  668126 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:44:59.586706  668126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 13:44:59.586725  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:59.589871  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:59.590246  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:59.590269  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:59.590560  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:59.590754  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:59.590930  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:59.591065  668126 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa Username:docker}
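
The addon installer above copies storage-provisioner.yaml from memory to the node over SSH (sshutil.go builds the client, ssh_runner.go performs the scp). A minimal illustrative sketch of the same idea, reusing the key path, user, and address from the sshutil line; the sudo-tee transfer here is a simplification, not minikube's actual scp implementation:

	package main

	import (
		"bytes"
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user, IP and port are taken from the sshutil log line above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.61.182:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		// Stream the manifest from memory and write it on the node with sudo tee,
		// a simple stand-in for ssh_runner's in-memory scp.
		manifest := []byte("# contents of storage-provisioner.yaml would go here\n")
		sess.Stdin = bytes.NewReader(manifest)
		if err := sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
			panic(err)
		}
		fmt.Println("manifest copied")
	}
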
	I0317 13:44:59.598560  668126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I0317 13:44:59.599025  668126 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:44:59.599385  668126 main.go:141] libmachine: Using API Version  1
	I0317 13:44:59.599396  668126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:44:59.599801  668126 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:44:59.599943  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetState
	I0317 13:44:59.601499  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .DriverName
	I0317 13:44:59.601715  668126 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 13:44:59.601728  668126 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 13:44:59.601742  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHHostname
	I0317 13:44:59.604327  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:59.604685  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:7e:2e", ip: ""} in network mk-force-systemd-flag-638911: {Iface:virbr3 ExpiryTime:2025-03-17 14:44:32 +0000 UTC Type:0 Mac:52:54:00:22:7e:2e Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:force-systemd-flag-638911 Clientid:01:52:54:00:22:7e:2e}
	I0317 13:44:59.604702  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | domain force-systemd-flag-638911 has defined IP address 192.168.61.182 and MAC address 52:54:00:22:7e:2e in network mk-force-systemd-flag-638911
	I0317 13:44:59.604878  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHPort
	I0317 13:44:59.605050  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHKeyPath
	I0317 13:44:59.605271  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .GetSSHUsername
	I0317 13:44:59.605440  668126 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/force-systemd-flag-638911/id_rsa Username:docker}
	I0317 13:44:59.757468  668126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 13:44:59.809744  668126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:44:59.978984  668126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 13:45:00.047635  668126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:45:00.313895  668126 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
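
The "host record injected into CoreDNS's ConfigMap" line above is the result of the sed pipeline run at 13:44:59.757468, which adds a hosts block for host.minikube.internal ahead of the forward directive in the Corefile. For reference, a hedged client-go sketch that performs the same edit through the API instead of kubectl-over-SSH (the kubeconfig path is the one from the log; the helper is illustrative only):

	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the same kubeconfig the test run updates above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20539-621978/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		// Insert the same hosts block the logged sed pipeline adds before the
		// "forward . /etc/resolv.conf" directive.
		hosts := "        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }\n"
		corefile := cm.Data["Corefile"]
		if !strings.Contains(corefile, "host.minikube.internal") {
			cm.Data["Corefile"] = strings.Replace(corefile, "        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
			if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.Background(), cm, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
	}
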
	I0317 13:45:00.314948  668126 kapi.go:59] client config for force-systemd-flag-638911: &rest.Config{Host:"https://192.168.61.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.crt", KeyFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.key", CAFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 13:45:00.315786  668126 main.go:141] libmachine: Making call to close driver server
	I0317 13:45:00.315809  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .Close
	I0317 13:45:00.316503  668126 kapi.go:59] client config for force-systemd-flag-638911: &rest.Config{Host:"https://192.168.61.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.crt", KeyFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/force-systemd-flag-638911/client.key", CAFile:"/home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 13:45:00.316826  668126 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:45:00.316888  668126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
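
At this point minikube waits for the kube-apiserver process by repeatedly running the pgrep command shown above. A minimal sketch of such a wait loop (standard library only; it runs pgrep locally for illustration, whereas minikube issues it through its ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls the same pgrep pattern the log shows until it
	// returns a PID or the deadline passes.
	func waitForAPIServerProcess(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				return string(out), nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return "", fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		pid, err := waitForAPIServerProcess(2 * time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("kube-apiserver pid:", pid)
	}
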
	I0317 13:45:00.318706  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | Closing plugin on server side
	I0317 13:45:00.318725  668126 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:45:00.318740  668126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:45:00.318749  668126 main.go:141] libmachine: Making call to close driver server
	I0317 13:45:00.318764  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .Close
	I0317 13:45:00.321852  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | Closing plugin on server side
	I0317 13:45:00.321982  668126 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:45:00.322008  668126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:45:00.337205  668126 main.go:141] libmachine: Making call to close driver server
	I0317 13:45:00.337229  668126 main.go:141] libmachine: (force-systemd-flag-638911) Calling .Close
	I0317 13:45:00.337636  668126 main.go:141] libmachine: (force-systemd-flag-638911) DBG | Closing plugin on server side
	I0317 13:45:00.337681  668126 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:45:00.337689  668126 main.go:141] libmachine: Making call to close connection to plugin binary
	
	
	==> CRI-O <==
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.854384543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219100854273144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de23cd31-d919-4330-a3fb-a73e629d2d39 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.854875540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f8f2a12-8bd6-4883-ae5a-4e2bce5febab name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.854930820Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f8f2a12-8bd6-4883-ae5a-4e2bce5febab name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.855189035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f4bba918c2ed691ed93db3bfa2a430b91e978f7fb04ccfcdf30233f10f4762b,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742219079120353152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1accdee97904160e759bd5618a179ca1acacf6f6d81271b331d0920579f83991,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742219075284669384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2914ee1335aa2a6be26c50a3a61c5dca91834112ccc8ff1759800af0c1ea123a,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742219075253306028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a03eefae624c51f84e0f7e8f0b2dfd25ad7791bdfcbb62cc6bcd2773b5419186,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742219075265749982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93948e4f8ff37d0005ad21daaa2c57e3e7d94f8f606e996a75013ff3731c84e,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742219075268502181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b0efa9e2456674ffdd911261ae20d5427752acea6606d860c649f492a46374,PodSandboxId:fafd1e94e95001cb2bd2e8c3299260bcdb014cc294be911af7604332119d91c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742219061485030481,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1742219060636810404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1742219060585693061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1742219060510612651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1742219060482777397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1742219060498035916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2,PodSandboxId:f543f844cad0307c22933653a3c0d8327a16f27d31224cc5aac21478204e3b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1742219017876831957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f8f2a12-8bd6-4883-ae5a-4e2bce5febab name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.899519521Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8769c67-17ee-407a-abfb-a55f3d69c12c name=/runtime.v1.RuntimeService/Version
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.899613191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8769c67-17ee-407a-abfb-a55f3d69c12c name=/runtime.v1.RuntimeService/Version
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.900823238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27a03328-678b-4166-85e0-fc2dd4fdfb6f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.901572256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219100901519736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27a03328-678b-4166-85e0-fc2dd4fdfb6f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.902274741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d242dfb9-8ffc-4551-9c33-8707ec787083 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.902331423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d242dfb9-8ffc-4551-9c33-8707ec787083 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.902593673Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f4bba918c2ed691ed93db3bfa2a430b91e978f7fb04ccfcdf30233f10f4762b,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742219079120353152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1accdee97904160e759bd5618a179ca1acacf6f6d81271b331d0920579f83991,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742219075284669384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2914ee1335aa2a6be26c50a3a61c5dca91834112ccc8ff1759800af0c1ea123a,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742219075253306028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a03eefae624c51f84e0f7e8f0b2dfd25ad7791bdfcbb62cc6bcd2773b5419186,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742219075265749982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93948e4f8ff37d0005ad21daaa2c57e3e7d94f8f606e996a75013ff3731c84e,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742219075268502181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b0efa9e2456674ffdd911261ae20d5427752acea6606d860c649f492a46374,PodSandboxId:fafd1e94e95001cb2bd2e8c3299260bcdb014cc294be911af7604332119d91c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742219061485030481,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1742219060636810404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1742219060585693061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1742219060510612651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1742219060482777397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1742219060498035916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2,PodSandboxId:f543f844cad0307c22933653a3c0d8327a16f27d31224cc5aac21478204e3b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1742219017876831957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d242dfb9-8ffc-4551-9c33-8707ec787083 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.944437240Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fff0fae4-888a-4f37-9c7e-71403429895b name=/runtime.v1.RuntimeService/Version
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.944509339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fff0fae4-888a-4f37-9c7e-71403429895b name=/runtime.v1.RuntimeService/Version
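
The CRI-O entries in this section are the daemon's debug trace of incoming CRI gRPC calls (Version, ImageFsInfo, ListContainers). For reference, the same endpoints can be queried directly over the CRI socket; a minimal sketch, assuming CRI-O's default /var/run/crio/crio.sock path and the k8s.io/cri-api v1 client:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Default CRI-O socket; adjust if the node is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // e.g. "cri-o 1.29.1", as in the responses logged here

		// Unfiltered list, equivalent to the ListContainers responses in the log.
		cl, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range cl.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}
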
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.946045358Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a92b5b6-4e77-487b-8780-7b7d4c69a127 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.946440769Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219100946419262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a92b5b6-4e77-487b-8780-7b7d4c69a127 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.947043771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40f8d13d-631c-459f-93f1-03fded58ed42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.947095565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40f8d13d-631c-459f-93f1-03fded58ed42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.947655565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f4bba918c2ed691ed93db3bfa2a430b91e978f7fb04ccfcdf30233f10f4762b,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742219079120353152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1accdee97904160e759bd5618a179ca1acacf6f6d81271b331d0920579f83991,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742219075284669384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2914ee1335aa2a6be26c50a3a61c5dca91834112ccc8ff1759800af0c1ea123a,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742219075253306028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a03eefae624c51f84e0f7e8f0b2dfd25ad7791bdfcbb62cc6bcd2773b5419186,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742219075265749982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93948e4f8ff37d0005ad21daaa2c57e3e7d94f8f606e996a75013ff3731c84e,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742219075268502181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b0efa9e2456674ffdd911261ae20d5427752acea6606d860c649f492a46374,PodSandboxId:fafd1e94e95001cb2bd2e8c3299260bcdb014cc294be911af7604332119d91c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742219061485030481,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1742219060636810404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1742219060585693061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1742219060510612651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1742219060482777397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1742219060498035916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2,PodSandboxId:f543f844cad0307c22933653a3c0d8327a16f27d31224cc5aac21478204e3b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1742219017876831957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40f8d13d-631c-459f-93f1-03fded58ed42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.990847632Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ad239f7-7ed2-4e79-9319-2f6f483ec4cf name=/runtime.v1.RuntimeService/Version
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.990989345Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ad239f7-7ed2-4e79-9319-2f6f483ec4cf name=/runtime.v1.RuntimeService/Version
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.992376862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be94b23d-a756-485d-a284-c7bbaf07a65c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.993010542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219100992941946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be94b23d-a756-485d-a284-c7bbaf07a65c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.994550790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8ef4d1d-70bb-4dd3-a548-1d5c522b661f name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.994633101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8ef4d1d-70bb-4dd3-a548-1d5c522b661f name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 13:45:00 pause-880805 crio[2349]: time="2025-03-17 13:45:00.994909816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f4bba918c2ed691ed93db3bfa2a430b91e978f7fb04ccfcdf30233f10f4762b,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742219079120353152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1accdee97904160e759bd5618a179ca1acacf6f6d81271b331d0920579f83991,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742219075284669384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2914ee1335aa2a6be26c50a3a61c5dca91834112ccc8ff1759800af0c1ea123a,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742219075253306028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a03eefae624c51f84e0f7e8f0b2dfd25ad7791bdfcbb62cc6bcd2773b5419186,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742219075265749982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93948e4f8ff37d0005ad21daaa2c57e3e7d94f8f606e996a75013ff3731c84e,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742219075268502181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b0efa9e2456674ffdd911261ae20d5427752acea6606d860c649f492a46374,PodSandboxId:fafd1e94e95001cb2bd2e8c3299260bcdb014cc294be911af7604332119d91c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742219061485030481,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc,PodSandboxId:9c7a532ac45a37aabb8560ab426ec1c810e117d4659349f4a360679ae82901cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1742219060636810404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2d84b66050c0312b166d06ad0951247,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234,PodSandboxId:3491c828f7b1d9e291ca98fff717cdb5f79cafc983721d6beead029bf9991500,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1742219060585693061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2bdc060f307a9090c1aa6b0520c197,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.ku
bernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb,PodSandboxId:5de94530a887457317874300547cbc75818434e029e786bc5a1291929f8ea0bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1742219060510612651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20aeb65e4e70616736efe266d0bd89c2,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523,PodSandboxId:09d43634e3c7889aad549a4df28cb1ec06b30adcb7c073389133b5517a90ec85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1742219060482777397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j6xzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 735bd65e-41e7-48bc-b9c2-c6fdda988310,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9,PodSandboxId:a55f033e05b5883438d6bd2f3edb40f7baaeac898761e8cb63aff4370f09c024,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1742219060498035916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-880805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b06f6c607f93299e404e8065aa6c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2,PodSandboxId:f543f844cad0307c22933653a3c0d8327a16f27d31224cc5aac21478204e3b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1742219017876831957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nttjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b185d851-a2d4-4a9f-a30b-26d34b39beeb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8ef4d1d-70bb-4dd3-a548-1d5c522b661f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0f4bba918c2ed       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   21 seconds ago       Running             kube-proxy                2                   09d43634e3c78       kube-proxy-j6xzf
	1accdee979041       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   25 seconds ago       Running             kube-scheduler            2                   a55f033e05b58       kube-scheduler-pause-880805
	f93948e4f8ff3       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   25 seconds ago       Running             kube-controller-manager   2                   3491c828f7b1d       kube-controller-manager-pause-880805
	a03eefae624c5       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   25 seconds ago       Running             etcd                      2                   9c7a532ac45a3       etcd-pause-880805
	2914ee1335aa2       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   25 seconds ago       Running             kube-apiserver            2                   5de94530a8874       kube-apiserver-pause-880805
	55b0efa9e2456       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   39 seconds ago       Running             coredns                   1                   fafd1e94e9500       coredns-668d6bf9bc-nttjk
	48c1359285825       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   40 seconds ago       Exited              etcd                      1                   9c7a532ac45a3       etcd-pause-880805
	0a0da51f434a6       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   40 seconds ago       Exited              kube-controller-manager   1                   3491c828f7b1d       kube-controller-manager-pause-880805
	bef8d4672be7b       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   40 seconds ago       Exited              kube-apiserver            1                   5de94530a8874       kube-apiserver-pause-880805
	e21abd81d7508       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   40 seconds ago       Exited              kube-scheduler            1                   a55f033e05b58       kube-scheduler-pause-880805
	43efe3e98767e       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   40 seconds ago       Exited              kube-proxy                1                   09d43634e3c78       kube-proxy-j6xzf
	8116dadd952ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   f543f844cad03       coredns-668d6bf9bc-nttjk
	
	
	==> coredns [55b0efa9e2456674ffdd911261ae20d5427752acea6606d860c649f492a46374] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[966311080]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:44:21.721) (total time: 10001ms):
	Trace[966311080]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:44:31.722)
	Trace[966311080]: [10.001013795s] [10.001013795s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1762825214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:44:21.724) (total time: 10000ms):
	Trace[1762825214]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:44:31.725)
	Trace[1762825214]: [10.000795801s] [10.000795801s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[640229930]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:44:21.725) (total time: 10000ms):
	Trace[640229930]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:44:31.725)
	Trace[640229930]: [10.000500401s] [10.000500401s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [8116dadd952ca32fcbfe9156b11b310b4e093943107a8c7966e5bf116e3a61b2] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52791 - 19689 "HINFO IN 8981724560520950910.7206283758638428393. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.053815495s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-880805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-880805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c
	                    minikube.k8s.io/name=pause-880805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T13_43_32_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 13:43:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-880805
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 13:44:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 13:44:38 +0000   Mon, 17 Mar 2025 13:43:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 13:44:38 +0000   Mon, 17 Mar 2025 13:43:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 13:44:38 +0000   Mon, 17 Mar 2025 13:43:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 13:44:38 +0000   Mon, 17 Mar 2025 13:43:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    pause-880805
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 c431cdce110945adbf98cae894dca125
	  System UUID:                c431cdce-1109-45ad-bf98-cae894dca125
	  Boot ID:                    9bde7860-4726-428d-a194-809967bcd0e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-nttjk                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-pause-880805                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         89s
	  kube-system                 kube-apiserver-pause-880805             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-880805    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-j6xzf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-pause-880805             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 83s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 95s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s (x8 over 95s)  kubelet          Node pause-880805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x8 over 95s)  kubelet          Node pause-880805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x7 over 95s)  kubelet          Node pause-880805 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node pause-880805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                kubelet          Node pause-880805 status is now: NodeHasSufficientPID
	  Normal  NodeReady                89s                kubelet          Node pause-880805 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node pause-880805 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           86s                node-controller  Node pause-880805 event: Registered Node pause-880805 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-880805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-880805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-880805 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node pause-880805 event: Registered Node pause-880805 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.941373] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.056121] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053744] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.191774] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.111552] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.243269] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +3.986973] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +5.364653] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.056661] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.505433] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.087221] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.273144] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.098100] kauditd_printk_skb: 18 callbacks suppressed
	[Mar17 13:44] systemd-fstab-generator[2274]: Ignoring "noauto" option for root device
	[  +0.084669] kauditd_printk_skb: 103 callbacks suppressed
	[  +0.073275] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.171305] systemd-fstab-generator[2300]: Ignoring "noauto" option for root device
	[  +0.144950] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	[  +0.268208] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +1.504793] systemd-fstab-generator[2465]: Ignoring "noauto" option for root device
	[ +12.400095] kauditd_printk_skb: 197 callbacks suppressed
	[  +2.551811] systemd-fstab-generator[3298]: Ignoring "noauto" option for root device
	[  +4.535186] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.748351] systemd-fstab-generator[3714]: Ignoring "noauto" option for root device
	
	
	==> etcd [48c1359285825c4611ea0fd02025f75636c99e39dbe3e43706cc2e49447714fc] <==
	{"level":"warn","ts":"2025-03-17T13:44:21.261703Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-03-17T13:44:21.262112Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.171:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.171:2380","--initial-cluster=pause-880805=https://192.168.39.171:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.171:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.171:2380","--name=pause-880805","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2025-03-17T13:44:21.269795Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2025-03-17T13:44:21.269843Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-03-17T13:44:21.269861Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.171:2380"]}
	{"level":"info","ts":"2025-03-17T13:44:21.269915Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-03-17T13:44:21.272848Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.171:2379"]}
	{"level":"info","ts":"2025-03-17T13:44:21.273462Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-880805","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.171:2380"],"listen-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.171:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-clust
er-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2025-03-17T13:44:21.308860Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"24.537232ms"}
	{"level":"info","ts":"2025-03-17T13:44:21.331636Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-03-17T13:44:21.349830Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","commit-index":438}
	{"level":"info","ts":"2025-03-17T13:44:21.352229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f switched to configuration voters=()"}
	{"level":"info","ts":"2025-03-17T13:44:21.354027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became follower at term 2"}
	{"level":"info","ts":"2025-03-17T13:44:21.354101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4e6b9cdcc1ed933f [peers: [], term: 2, commit: 438, applied: 0, lastindex: 438, lastterm: 2]"}
	{"level":"warn","ts":"2025-03-17T13:44:21.359329Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-03-17T13:44:21.421117Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":416}
	
	
	==> etcd [a03eefae624c51f84e0f7e8f0b2dfd25ad7791bdfcbb62cc6bcd2773b5419186] <==
	{"level":"info","ts":"2025-03-17T13:44:37.414207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T13:44:37.414238Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-17T13:44:37.414317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:44:37.414890Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:44:37.415558Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.171:2379"}
	{"level":"info","ts":"2025-03-17T13:44:37.418381Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:44:37.419085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T13:44:48.986948Z","caller":"traceutil/trace.go:171","msg":"trace[436761010] linearizableReadLoop","detail":"{readStateIndex:530; appliedIndex:529; }","duration":"185.706103ms","start":"2025-03-17T13:44:48.801225Z","end":"2025-03-17T13:44:48.986931Z","steps":["trace[436761010] 'read index received'  (duration: 185.607147ms)","trace[436761010] 'applied index is now lower than readState.Index'  (duration: 98.451µs)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T13:44:48.987234Z","caller":"traceutil/trace.go:171","msg":"trace[729740229] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"212.726871ms","start":"2025-03-17T13:44:48.774487Z","end":"2025-03-17T13:44:48.987213Z","steps":["trace[729740229] 'process raft request'  (duration: 212.03331ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:44:48.988035Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.798972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-880805\" limit:1 ","response":"range_response_count:1 size:5847"}
	{"level":"info","ts":"2025-03-17T13:44:48.988999Z","caller":"traceutil/trace.go:171","msg":"trace[1332939784] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-880805; range_end:; response_count:1; response_revision:487; }","duration":"187.774614ms","start":"2025-03-17T13:44:48.801173Z","end":"2025-03-17T13:44:48.988948Z","steps":["trace[1332939784] 'agreement among raft nodes before linearized reading'  (duration: 186.714602ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:44:49.393590Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.816947ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10610363780214823200 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" mod_revision:420 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-03-17T13:44:49.393999Z","caller":"traceutil/trace.go:171","msg":"trace[1470306950] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:530; }","duration":"401.884895ms","start":"2025-03-17T13:44:48.992062Z","end":"2025-03-17T13:44:49.393946Z","steps":["trace[1470306950] 'read index received'  (duration: 113.223944ms)","trace[1470306950] 'applied index is now lower than readState.Index'  (duration: 288.659342ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T13:44:49.394333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"402.259079ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-880805\" limit:1 ","response":"range_response_count:1 size:5428"}
	{"level":"info","ts":"2025-03-17T13:44:49.394441Z","caller":"traceutil/trace.go:171","msg":"trace[405305159] range","detail":"{range_begin:/registry/minions/pause-880805; range_end:; response_count:1; response_revision:488; }","duration":"402.394277ms","start":"2025-03-17T13:44:48.992038Z","end":"2025-03-17T13:44:49.394432Z","steps":["trace[405305159] 'agreement among raft nodes before linearized reading'  (duration: 402.204369ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:44:49.394593Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T13:44:48.992024Z","time spent":"402.55052ms","remote":"127.0.0.1:40064","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":5451,"request content":"key:\"/registry/minions/pause-880805\" limit:1 "}
	{"level":"info","ts":"2025-03-17T13:44:49.394451Z","caller":"traceutil/trace.go:171","msg":"trace[1027070799] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"574.051038ms","start":"2025-03-17T13:44:48.820380Z","end":"2025-03-17T13:44:49.394431Z","steps":["trace[1027070799] 'process raft request'  (duration: 285.006733ms)","trace[1027070799] 'compare'  (duration: 287.496908ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T13:44:49.394764Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T13:44:48.820358Z","time spent":"574.355321ms","remote":"127.0.0.1:40148","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" mod_revision:420 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-easqndy6ehmu3nfxnmrtxfecne\" > >"}
	{"level":"warn","ts":"2025-03-17T13:44:49.969166Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.87184ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10610363780214823208 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.171\" mod_revision:428 > success:<request_put:<key:\"/registry/masterleases/192.168.39.171\" value_size:67 lease:1386991743360047398 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.171\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-03-17T13:44:49.969324Z","caller":"traceutil/trace.go:171","msg":"trace[1588882162] linearizableReadLoop","detail":"{readStateIndex:533; appliedIndex:532; }","duration":"167.908326ms","start":"2025-03-17T13:44:49.801401Z","end":"2025-03-17T13:44:49.969309Z","steps":["trace[1588882162] 'read index received'  (duration: 35.743009ms)","trace[1588882162] 'applied index is now lower than readState.Index'  (duration: 132.164397ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T13:44:49.969411Z","caller":"traceutil/trace.go:171","msg":"trace[918317538] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"193.138658ms","start":"2025-03-17T13:44:49.776252Z","end":"2025-03-17T13:44:49.969391Z","steps":["trace[918317538] 'process raft request'  (duration: 60.945004ms)","trace[918317538] 'compare'  (duration: 131.65165ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T13:44:49.969437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.054811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-880805\" limit:1 ","response":"range_response_count:1 size:5847"}
	{"level":"info","ts":"2025-03-17T13:44:49.969539Z","caller":"traceutil/trace.go:171","msg":"trace[1770803393] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-880805; range_end:; response_count:1; response_revision:489; }","duration":"168.186629ms","start":"2025-03-17T13:44:49.801344Z","end":"2025-03-17T13:44:49.969531Z","steps":["trace[1770803393] 'agreement among raft nodes before linearized reading'  (duration: 168.022777ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:44:51.338639Z","caller":"traceutil/trace.go:171","msg":"trace[631472093] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"250.406614ms","start":"2025-03-17T13:44:51.088215Z","end":"2025-03-17T13:44:51.338622Z","steps":["trace[631472093] 'process raft request'  (duration: 250.23266ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:44:51.647770Z","caller":"traceutil/trace.go:171","msg":"trace[1022612325] transaction","detail":"{read_only:false; response_revision:491; number_of_response:1; }","duration":"299.333946ms","start":"2025-03-17T13:44:51.348412Z","end":"2025-03-17T13:44:51.647746Z","steps":["trace[1022612325] 'process raft request'  (duration: 299.178871ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:45:01 up 2 min,  0 users,  load average: 0.74, 0.29, 0.10
	Linux pause-880805 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2914ee1335aa2a6be26c50a3a61c5dca91834112ccc8ff1759800af0c1ea123a] <==
	I0317 13:44:38.553122       1 aggregator.go:171] initial CRD sync complete...
	I0317 13:44:38.553152       1 autoregister_controller.go:144] Starting autoregister controller
	I0317 13:44:38.553159       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0317 13:44:38.553164       1 cache.go:39] Caches are synced for autoregister controller
	I0317 13:44:38.570028       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0317 13:44:38.595125       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0317 13:44:38.595155       1 policy_source.go:240] refreshing policies
	I0317 13:44:38.636467       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0317 13:44:38.638737       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0317 13:44:38.639317       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0317 13:44:38.639363       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0317 13:44:38.641154       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0317 13:44:38.640209       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0317 13:44:38.646650       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0317 13:44:38.647946       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0317 13:44:38.654459       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0317 13:44:38.844728       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 13:44:39.447667       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0317 13:44:40.135558       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 13:44:40.176906       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 13:44:40.199915       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 13:44:40.205907       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 13:44:42.021685       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 13:44:42.069889       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 13:44:47.596883       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [bef8d4672be7bd9e9658af824e1d5261dc50ef8e6df9de88050372e7eb7caceb] <==
	W0317 13:44:20.959147       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0317 13:44:20.960176       1 options.go:238] external host was not specified, using 192.168.39.171
	I0317 13:44:20.970225       1 server.go:143] Version: v1.32.2
	I0317 13:44:20.970319       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0317 13:44:21.775129       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:21.776896       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0317 13:44:21.782626       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0317 13:44:21.799676       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0317 13:44:21.810070       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0317 13:44:21.810114       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0317 13:44:21.810439       1 instance.go:233] Using reconciler: lease
	W0317 13:44:21.811554       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:22.776327       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:22.777786       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:22.812576       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:24.242561       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:24.242662       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:24.441326       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:26.581906       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:26.676936       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:26.800227       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:30.573494       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:30.590311       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:44:30.771643       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0a0da51f434a65efcdc947b17a721424caf5716f097479c965bfdeb8ee964234] <==
	
	
	==> kube-controller-manager [f93948e4f8ff37d0005ad21daaa2c57e3e7d94f8f606e996a75013ff3731c84e] <==
	I0317 13:44:41.758413       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0317 13:44:41.760761       1 shared_informer.go:320] Caches are synced for crt configmap
	I0317 13:44:41.763105       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0317 13:44:41.764455       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0317 13:44:41.766734       1 shared_informer.go:320] Caches are synced for endpoint
	I0317 13:44:41.766806       1 shared_informer.go:320] Caches are synced for deployment
	I0317 13:44:41.766812       1 shared_informer.go:320] Caches are synced for disruption
	I0317 13:44:41.766886       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0317 13:44:41.767566       1 shared_informer.go:320] Caches are synced for job
	I0317 13:44:41.767927       1 shared_informer.go:320] Caches are synced for daemon sets
	I0317 13:44:41.768776       1 shared_informer.go:320] Caches are synced for attach detach
	I0317 13:44:41.769367       1 shared_informer.go:320] Caches are synced for stateful set
	I0317 13:44:41.771320       1 shared_informer.go:320] Caches are synced for TTL
	I0317 13:44:41.772636       1 shared_informer.go:320] Caches are synced for PVC protection
	I0317 13:44:41.773846       1 shared_informer.go:320] Caches are synced for resource quota
	I0317 13:44:41.775017       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0317 13:44:41.783275       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0317 13:44:41.786863       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0317 13:44:41.791135       1 shared_informer.go:320] Caches are synced for persistent volume
	I0317 13:44:41.791218       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 13:44:41.792475       1 shared_informer.go:320] Caches are synced for PV protection
	I0317 13:44:47.605102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="29.089822ms"
	I0317 13:44:47.605342       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="85.251µs"
	I0317 13:44:47.627470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="21.729397ms"
	I0317 13:44:47.627856       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="107.777µs"
	
	
	==> kube-proxy [0f4bba918c2ed691ed93db3bfa2a430b91e978f7fb04ccfcdf30233f10f4762b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 13:44:39.258901       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0317 13:44:39.266419       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.171"]
	E0317 13:44:39.266611       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 13:44:39.295199       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0317 13:44:39.295235       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0317 13:44:39.295264       1 server_linux.go:170] "Using iptables Proxier"
	I0317 13:44:39.297423       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 13:44:39.297688       1 server.go:497] "Version info" version="v1.32.2"
	I0317 13:44:39.297730       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:44:39.298923       1 config.go:199] "Starting service config controller"
	I0317 13:44:39.299027       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 13:44:39.299065       1 config.go:105] "Starting endpoint slice config controller"
	I0317 13:44:39.299083       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 13:44:39.299565       1 config.go:329] "Starting node config controller"
	I0317 13:44:39.299619       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 13:44:39.399887       1 shared_informer.go:320] Caches are synced for node config
	I0317 13:44:39.400061       1 shared_informer.go:320] Caches are synced for service config
	I0317 13:44:39.400071       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523] <==
	I0317 13:44:21.824672       1 server_linux.go:66] "Using iptables proxy"
	E0317 13:44:21.857633       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 13:44:21.894314       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 13:44:32.740661       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-880805\": dial tcp 192.168.39.171:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.171:47880->192.168.39.171:8443: read: connection reset by peer"
	
	
	==> kube-scheduler [1accdee97904160e759bd5618a179ca1acacf6f6d81271b331d0920579f83991] <==
	I0317 13:44:36.642606       1 serving.go:386] Generated self-signed cert in-memory
	W0317 13:44:38.530418       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0317 13:44:38.530456       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0317 13:44:38.530466       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0317 13:44:38.530472       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0317 13:44:38.559247       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0317 13:44:38.559486       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:44:38.561894       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0317 13:44:38.562245       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0317 13:44:38.562334       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0317 13:44:38.562258       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0317 13:44:38.663118       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e21abd81d7508efe7bc3caf08a449314dcea64997ffa201b05a17f096917a0c9] <==
	I0317 13:44:22.217750       1 serving.go:386] Generated self-signed cert in-memory
	W0317 13:44:32.740808       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.171:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.171:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.171:47890->192.168.39.171:8443: read: connection reset by peer
	W0317 13:44:32.740838       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0317 13:44:32.740845       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0317 13:44:32.755084       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0317 13:44:32.755152       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0317 13:44:32.755181       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0317 13:44:32.757173       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0317 13:44:32.757247       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0317 13:44:32.757275       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0317 13:44:32.757573       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0317 13:44:32.757638       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0317 13:44:32.757796       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0317 13:44:32.757901       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0317 13:44:32.758276       1 server.go:266] "waiting for handlers to sync" err="context canceled"
	I0317 13:44:32.758359       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0317 13:44:32.758460       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 17 13:44:36 pause-880805 kubelet[3305]: E0317 13:44:36.974262    3305 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-880805\" not found" node="pause-880805"
	Mar 17 13:44:37 pause-880805 kubelet[3305]: E0317 13:44:37.974423    3305 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-880805\" not found" node="pause-880805"
	Mar 17 13:44:37 pause-880805 kubelet[3305]: E0317 13:44:37.975260    3305 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-880805\" not found" node="pause-880805"
	Mar 17 13:44:37 pause-880805 kubelet[3305]: E0317 13:44:37.976131    3305 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-880805\" not found" node="pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.523716    3305 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.613849    3305 kubelet_node_status.go:125] "Node was previously registered" node="pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.613939    3305 kubelet_node_status.go:79] "Successfully registered node" node="pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.614027    3305 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.615344    3305 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: E0317 13:44:38.660244    3305 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-880805\" already exists" pod="kube-system/etcd-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.660289    3305 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: E0317 13:44:38.670156    3305 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-880805\" already exists" pod="kube-system/kube-apiserver-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.670199    3305 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: E0317 13:44:38.678413    3305 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-880805\" already exists" pod="kube-system/kube-controller-manager-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.678465    3305 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: E0317 13:44:38.685908    3305 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-880805\" already exists" pod="kube-system/kube-scheduler-pause-880805"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.805000    3305 apiserver.go:52] "Watching apiserver"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.823497    3305 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.840087    3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/735bd65e-41e7-48bc-b9c2-c6fdda988310-lib-modules\") pod \"kube-proxy-j6xzf\" (UID: \"735bd65e-41e7-48bc-b9c2-c6fdda988310\") " pod="kube-system/kube-proxy-j6xzf"
	Mar 17 13:44:38 pause-880805 kubelet[3305]: I0317 13:44:38.840262    3305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/735bd65e-41e7-48bc-b9c2-c6fdda988310-xtables-lock\") pod \"kube-proxy-j6xzf\" (UID: \"735bd65e-41e7-48bc-b9c2-c6fdda988310\") " pod="kube-system/kube-proxy-j6xzf"
	Mar 17 13:44:39 pause-880805 kubelet[3305]: I0317 13:44:39.109278    3305 scope.go:117] "RemoveContainer" containerID="43efe3e98767e5de29116924b96202af721d30316072ff482ad0eab5a6ae4523"
	Mar 17 13:44:44 pause-880805 kubelet[3305]: E0317 13:44:44.933524    3305 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219084932809517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 13:44:44 pause-880805 kubelet[3305]: E0317 13:44:44.933597    3305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219084932809517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 13:44:54 pause-880805 kubelet[3305]: E0317 13:44:54.936676    3305 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219094936188037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Mar 17 13:44:54 pause-880805 kubelet[3305]: E0317 13:44:54.936713    3305 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742219094936188037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-880805 -n pause-880805
helpers_test.go:261: (dbg) Run:  kubectl --context pause-880805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (77.50s)
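Note on the post-mortem check above: helpers_test.go:261 shells out to kubectl with the field selector status.phase!=Running to flag any pods that are not Running after the second start. The following is a minimal, hypothetical client-go sketch of that same query, included only for illustration; it is not part of the minikube test harness, and the "pause-880805" context name is simply taken from the log above.

	// Hypothetical sketch (not part of the test harness): list pods that are not
	// in the Running phase, mirroring
	//   kubectl get po -A --field-selector=status.phase!=Running
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a rest.Config from the default kubeconfig, overriding the context
		// (context name assumed from the test profile above).
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "pause-880805"},
		).ClientConfig()
		if err != nil {
			log.Fatal(err)
		}

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// Same server-side filter the harness passes to kubectl.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\n", p.Namespace, p.Name)
		}
	}

An empty result from this query corresponds to the harness check finding no non-Running pods.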

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (296.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-803027 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-803027 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m56.403022913s)

                                                
                                                
-- stdout --
	* [old-k8s-version-803027] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-803027" primary control-plane node in "old-k8s-version-803027" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:45:03.356992  669182 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:45:03.357265  669182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:45:03.357276  669182 out.go:358] Setting ErrFile to fd 2...
	I0317 13:45:03.357280  669182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:45:03.357471  669182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:45:03.358046  669182 out.go:352] Setting JSON to false
	I0317 13:45:03.359103  669182 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12447,"bootTime":1742206656,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:45:03.359166  669182 start.go:139] virtualization: kvm guest
	I0317 13:45:03.361271  669182 out.go:177] * [old-k8s-version-803027] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:45:03.362708  669182 notify.go:220] Checking for updates...
	I0317 13:45:03.362781  669182 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:45:03.365419  669182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:45:03.366873  669182 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:45:03.368150  669182 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:45:03.369506  669182 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:45:03.370923  669182 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:45:03.372749  669182 config.go:182] Loaded profile config "cert-expiration-355456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:45:03.372888  669182 config.go:182] Loaded profile config "cert-options-197082": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:45:03.373014  669182 config.go:182] Loaded profile config "kubernetes-upgrade-312638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:45:03.373152  669182 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:45:03.411109  669182 out.go:177] * Using the kvm2 driver based on user configuration
	I0317 13:45:03.412539  669182 start.go:297] selected driver: kvm2
	I0317 13:45:03.412557  669182 start.go:901] validating driver "kvm2" against <nil>
	I0317 13:45:03.412577  669182 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:45:03.413482  669182 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:03.413609  669182 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 13:45:03.429978  669182 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 13:45:03.430017  669182 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 13:45:03.430261  669182 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:45:03.430304  669182 cni.go:84] Creating CNI manager for ""
	I0317 13:45:03.430355  669182 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:45:03.430363  669182 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 13:45:03.430407  669182 start.go:340] cluster config:
	{Name:old-k8s-version-803027 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:45:03.430517  669182 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:45:03.432348  669182 out.go:177] * Starting "old-k8s-version-803027" primary control-plane node in "old-k8s-version-803027" cluster
	I0317 13:45:03.433637  669182 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0317 13:45:03.433670  669182 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0317 13:45:03.433676  669182 cache.go:56] Caching tarball of preloaded images
	I0317 13:45:03.433771  669182 preload.go:172] Found /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0317 13:45:03.433783  669182 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0317 13:45:03.433875  669182 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/config.json ...
	I0317 13:45:03.433894  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/config.json: {Name:mk419cc2b8d21fbdf5252c0969e3d91f5b49cb1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:45:03.434025  669182 start.go:360] acquireMachinesLock for old-k8s-version-803027: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:45:29.624331  669182 start.go:364] duration metric: took 26.190251875s to acquireMachinesLock for "old-k8s-version-803027"
	I0317 13:45:29.624428  669182 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-803027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:45:29.624563  669182 start.go:125] createHost starting for "" (driver="kvm2")
	I0317 13:45:29.626704  669182 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0317 13:45:29.626956  669182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:45:29.627020  669182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:45:29.644466  669182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36073
	I0317 13:45:29.644924  669182 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:45:29.645450  669182 main.go:141] libmachine: Using API Version  1
	I0317 13:45:29.645477  669182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:45:29.645856  669182 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:45:29.646107  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetMachineName
	I0317 13:45:29.646330  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:29.646477  669182 start.go:159] libmachine.API.Create for "old-k8s-version-803027" (driver="kvm2")
	I0317 13:45:29.646513  669182 client.go:168] LocalClient.Create starting
	I0317 13:45:29.646553  669182 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem
	I0317 13:45:29.646595  669182 main.go:141] libmachine: Decoding PEM data...
	I0317 13:45:29.646616  669182 main.go:141] libmachine: Parsing certificate...
	I0317 13:45:29.646684  669182 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem
	I0317 13:45:29.646712  669182 main.go:141] libmachine: Decoding PEM data...
	I0317 13:45:29.646732  669182 main.go:141] libmachine: Parsing certificate...
	I0317 13:45:29.646760  669182 main.go:141] libmachine: Running pre-create checks...
	I0317 13:45:29.646774  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .PreCreateCheck
	I0317 13:45:29.647144  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetConfigRaw
	I0317 13:45:29.647617  669182 main.go:141] libmachine: Creating machine...
	I0317 13:45:29.647632  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .Create
	I0317 13:45:29.647785  669182 main.go:141] libmachine: (old-k8s-version-803027) creating KVM machine...
	I0317 13:45:29.647807  669182 main.go:141] libmachine: (old-k8s-version-803027) creating network...
	I0317 13:45:29.649030  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found existing default KVM network
	I0317 13:45:29.650600  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:29.650395  669606 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0a:b8:e8} reservation:<nil>}
	I0317 13:45:29.651257  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:29.651144  669606 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:03:54:88} reservation:<nil>}
	I0317 13:45:29.652299  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:29.652193  669606 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028ed70}
	I0317 13:45:29.652320  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | created network xml: 
	I0317 13:45:29.652338  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | <network>
	I0317 13:45:29.652352  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG |   <name>mk-old-k8s-version-803027</name>
	I0317 13:45:29.652366  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG |   <dns enable='no'/>
	I0317 13:45:29.652376  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG |   
	I0317 13:45:29.652388  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0317 13:45:29.652399  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG |     <dhcp>
	I0317 13:45:29.652405  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0317 13:45:29.652415  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG |     </dhcp>
	I0317 13:45:29.652441  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG |   </ip>
	I0317 13:45:29.652464  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG |   
	I0317 13:45:29.652516  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | </network>
	I0317 13:45:29.652552  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | 
	I0317 13:45:29.658023  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | trying to create private KVM network mk-old-k8s-version-803027 192.168.61.0/24...
	I0317 13:45:29.734287  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | private KVM network mk-old-k8s-version-803027 192.168.61.0/24 created
	I0317 13:45:29.734320  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:29.734229  669606 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:45:29.734356  669182 main.go:141] libmachine: (old-k8s-version-803027) setting up store path in /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027 ...
	I0317 13:45:29.734387  669182 main.go:141] libmachine: (old-k8s-version-803027) building disk image from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0317 13:45:29.734410  669182 main.go:141] libmachine: (old-k8s-version-803027) Downloading /home/jenkins/minikube-integration/20539-621978/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 13:45:30.009243  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:30.009088  669606 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa...
	I0317 13:45:30.248726  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:30.248565  669606 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/old-k8s-version-803027.rawdisk...
	I0317 13:45:30.248765  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Writing magic tar header
	I0317 13:45:30.248783  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Writing SSH key tar header
	I0317 13:45:30.248796  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:30.248708  669606 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027 ...
	I0317 13:45:30.248813  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027
	I0317 13:45:30.248904  669182 main.go:141] libmachine: (old-k8s-version-803027) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027 (perms=drwx------)
	I0317 13:45:30.248930  669182 main.go:141] libmachine: (old-k8s-version-803027) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines (perms=drwxr-xr-x)
	I0317 13:45:30.248942  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines
	I0317 13:45:30.248960  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:45:30.248974  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978
	I0317 13:45:30.248992  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0317 13:45:30.249004  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | checking permissions on dir: /home/jenkins
	I0317 13:45:30.249019  669182 main.go:141] libmachine: (old-k8s-version-803027) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube (perms=drwxr-xr-x)
	I0317 13:45:30.249035  669182 main.go:141] libmachine: (old-k8s-version-803027) setting executable bit set on /home/jenkins/minikube-integration/20539-621978 (perms=drwxrwxr-x)
	I0317 13:45:30.249048  669182 main.go:141] libmachine: (old-k8s-version-803027) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0317 13:45:30.249063  669182 main.go:141] libmachine: (old-k8s-version-803027) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0317 13:45:30.249074  669182 main.go:141] libmachine: (old-k8s-version-803027) creating domain...
	I0317 13:45:30.249086  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | checking permissions on dir: /home
	I0317 13:45:30.249158  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | skipping /home - not owner
	I0317 13:45:30.250340  669182 main.go:141] libmachine: (old-k8s-version-803027) define libvirt domain using xml: 
	I0317 13:45:30.250360  669182 main.go:141] libmachine: (old-k8s-version-803027) <domain type='kvm'>
	I0317 13:45:30.250371  669182 main.go:141] libmachine: (old-k8s-version-803027)   <name>old-k8s-version-803027</name>
	I0317 13:45:30.250379  669182 main.go:141] libmachine: (old-k8s-version-803027)   <memory unit='MiB'>2200</memory>
	I0317 13:45:30.250388  669182 main.go:141] libmachine: (old-k8s-version-803027)   <vcpu>2</vcpu>
	I0317 13:45:30.250401  669182 main.go:141] libmachine: (old-k8s-version-803027)   <features>
	I0317 13:45:30.250415  669182 main.go:141] libmachine: (old-k8s-version-803027)     <acpi/>
	I0317 13:45:30.250422  669182 main.go:141] libmachine: (old-k8s-version-803027)     <apic/>
	I0317 13:45:30.250428  669182 main.go:141] libmachine: (old-k8s-version-803027)     <pae/>
	I0317 13:45:30.250433  669182 main.go:141] libmachine: (old-k8s-version-803027)     
	I0317 13:45:30.250438  669182 main.go:141] libmachine: (old-k8s-version-803027)   </features>
	I0317 13:45:30.250447  669182 main.go:141] libmachine: (old-k8s-version-803027)   <cpu mode='host-passthrough'>
	I0317 13:45:30.250454  669182 main.go:141] libmachine: (old-k8s-version-803027)   
	I0317 13:45:30.250459  669182 main.go:141] libmachine: (old-k8s-version-803027)   </cpu>
	I0317 13:45:30.250467  669182 main.go:141] libmachine: (old-k8s-version-803027)   <os>
	I0317 13:45:30.250478  669182 main.go:141] libmachine: (old-k8s-version-803027)     <type>hvm</type>
	I0317 13:45:30.250487  669182 main.go:141] libmachine: (old-k8s-version-803027)     <boot dev='cdrom'/>
	I0317 13:45:30.250508  669182 main.go:141] libmachine: (old-k8s-version-803027)     <boot dev='hd'/>
	I0317 13:45:30.250517  669182 main.go:141] libmachine: (old-k8s-version-803027)     <bootmenu enable='no'/>
	I0317 13:45:30.250526  669182 main.go:141] libmachine: (old-k8s-version-803027)   </os>
	I0317 13:45:30.250533  669182 main.go:141] libmachine: (old-k8s-version-803027)   <devices>
	I0317 13:45:30.250537  669182 main.go:141] libmachine: (old-k8s-version-803027)     <disk type='file' device='cdrom'>
	I0317 13:45:30.250561  669182 main.go:141] libmachine: (old-k8s-version-803027)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/boot2docker.iso'/>
	I0317 13:45:30.250580  669182 main.go:141] libmachine: (old-k8s-version-803027)       <target dev='hdc' bus='scsi'/>
	I0317 13:45:30.250590  669182 main.go:141] libmachine: (old-k8s-version-803027)       <readonly/>
	I0317 13:45:30.250600  669182 main.go:141] libmachine: (old-k8s-version-803027)     </disk>
	I0317 13:45:30.250609  669182 main.go:141] libmachine: (old-k8s-version-803027)     <disk type='file' device='disk'>
	I0317 13:45:30.250622  669182 main.go:141] libmachine: (old-k8s-version-803027)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0317 13:45:30.250641  669182 main.go:141] libmachine: (old-k8s-version-803027)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/old-k8s-version-803027.rawdisk'/>
	I0317 13:45:30.250649  669182 main.go:141] libmachine: (old-k8s-version-803027)       <target dev='hda' bus='virtio'/>
	I0317 13:45:30.250679  669182 main.go:141] libmachine: (old-k8s-version-803027)     </disk>
	I0317 13:45:30.250695  669182 main.go:141] libmachine: (old-k8s-version-803027)     <interface type='network'>
	I0317 13:45:30.250703  669182 main.go:141] libmachine: (old-k8s-version-803027)       <source network='mk-old-k8s-version-803027'/>
	I0317 13:45:30.250707  669182 main.go:141] libmachine: (old-k8s-version-803027)       <model type='virtio'/>
	I0317 13:45:30.250714  669182 main.go:141] libmachine: (old-k8s-version-803027)     </interface>
	I0317 13:45:30.250719  669182 main.go:141] libmachine: (old-k8s-version-803027)     <interface type='network'>
	I0317 13:45:30.250725  669182 main.go:141] libmachine: (old-k8s-version-803027)       <source network='default'/>
	I0317 13:45:30.250729  669182 main.go:141] libmachine: (old-k8s-version-803027)       <model type='virtio'/>
	I0317 13:45:30.250735  669182 main.go:141] libmachine: (old-k8s-version-803027)     </interface>
	I0317 13:45:30.250743  669182 main.go:141] libmachine: (old-k8s-version-803027)     <serial type='pty'>
	I0317 13:45:30.250749  669182 main.go:141] libmachine: (old-k8s-version-803027)       <target port='0'/>
	I0317 13:45:30.250758  669182 main.go:141] libmachine: (old-k8s-version-803027)     </serial>
	I0317 13:45:30.250764  669182 main.go:141] libmachine: (old-k8s-version-803027)     <console type='pty'>
	I0317 13:45:30.250774  669182 main.go:141] libmachine: (old-k8s-version-803027)       <target type='serial' port='0'/>
	I0317 13:45:30.250779  669182 main.go:141] libmachine: (old-k8s-version-803027)     </console>
	I0317 13:45:30.250784  669182 main.go:141] libmachine: (old-k8s-version-803027)     <rng model='virtio'>
	I0317 13:45:30.250790  669182 main.go:141] libmachine: (old-k8s-version-803027)       <backend model='random'>/dev/random</backend>
	I0317 13:45:30.250796  669182 main.go:141] libmachine: (old-k8s-version-803027)     </rng>
	I0317 13:45:30.250801  669182 main.go:141] libmachine: (old-k8s-version-803027)     
	I0317 13:45:30.250805  669182 main.go:141] libmachine: (old-k8s-version-803027)     
	I0317 13:45:30.250809  669182 main.go:141] libmachine: (old-k8s-version-803027)   </devices>
	I0317 13:45:30.250816  669182 main.go:141] libmachine: (old-k8s-version-803027) </domain>
	I0317 13:45:30.250823  669182 main.go:141] libmachine: (old-k8s-version-803027) 
	I0317 13:45:30.255405  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:4d:db:0c in network default
	I0317 13:45:30.256157  669182 main.go:141] libmachine: (old-k8s-version-803027) starting domain...
	I0317 13:45:30.256184  669182 main.go:141] libmachine: (old-k8s-version-803027) ensuring networks are active...
	I0317 13:45:30.256201  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:30.256957  669182 main.go:141] libmachine: (old-k8s-version-803027) Ensuring network default is active
	I0317 13:45:30.257264  669182 main.go:141] libmachine: (old-k8s-version-803027) Ensuring network mk-old-k8s-version-803027 is active
	I0317 13:45:30.257871  669182 main.go:141] libmachine: (old-k8s-version-803027) getting domain XML...
	I0317 13:45:30.258741  669182 main.go:141] libmachine: (old-k8s-version-803027) creating domain...
	I0317 13:45:31.689653  669182 main.go:141] libmachine: (old-k8s-version-803027) waiting for IP...
	I0317 13:45:31.690680  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:31.691208  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:31.691250  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:31.691210  669606 retry.go:31] will retry after 304.250683ms: waiting for domain to come up
	I0317 13:45:31.996878  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:31.997573  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:31.997595  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:31.997471  669606 retry.go:31] will retry after 255.016207ms: waiting for domain to come up
	I0317 13:45:32.254181  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:32.254643  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:32.254679  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:32.254619  669606 retry.go:31] will retry after 452.135407ms: waiting for domain to come up
	I0317 13:45:32.708474  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:32.709032  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:32.709068  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:32.709004  669606 retry.go:31] will retry after 398.406512ms: waiting for domain to come up
	I0317 13:45:33.109648  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:33.110152  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:33.110184  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:33.110128  669606 retry.go:31] will retry after 602.499505ms: waiting for domain to come up
	I0317 13:45:33.714177  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:33.714769  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:33.714802  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:33.714720  669606 retry.go:31] will retry after 891.272458ms: waiting for domain to come up
	I0317 13:45:34.607433  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:34.607927  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:34.607952  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:34.607893  669606 retry.go:31] will retry after 781.532858ms: waiting for domain to come up
	I0317 13:45:35.390729  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:35.391230  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:35.391257  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:35.391190  669606 retry.go:31] will retry after 964.003899ms: waiting for domain to come up
	I0317 13:45:36.356481  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:36.356992  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:36.357020  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:36.356957  669606 retry.go:31] will retry after 1.853968592s: waiting for domain to come up
	I0317 13:45:38.212806  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:38.213353  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:38.213383  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:38.213312  669606 retry.go:31] will retry after 2.301371052s: waiting for domain to come up
	I0317 13:45:40.516895  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:40.517864  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:40.517894  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:40.517730  669606 retry.go:31] will retry after 1.991655337s: waiting for domain to come up
	I0317 13:45:42.510704  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:42.511352  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:42.511462  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:42.511341  669606 retry.go:31] will retry after 2.384098953s: waiting for domain to come up
	I0317 13:45:44.896710  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:44.897178  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:44.897232  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:44.897177  669606 retry.go:31] will retry after 2.760546642s: waiting for domain to come up
	I0317 13:45:47.660414  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:47.660970  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:45:47.661016  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:45:47.660945  669606 retry.go:31] will retry after 3.503451132s: waiting for domain to come up
	I0317 13:45:51.167057  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:51.167496  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has current primary IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:51.167524  669182 main.go:141] libmachine: (old-k8s-version-803027) found domain IP: 192.168.61.229
	I0317 13:45:51.167552  669182 main.go:141] libmachine: (old-k8s-version-803027) reserving static IP address...
	I0317 13:45:51.167850  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-803027", mac: "52:54:00:c7:07:e9", ip: "192.168.61.229"} in network mk-old-k8s-version-803027
	I0317 13:45:51.243022  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Getting to WaitForSSH function...
	I0317 13:45:51.243060  669182 main.go:141] libmachine: (old-k8s-version-803027) reserved static IP address 192.168.61.229 for domain old-k8s-version-803027
	I0317 13:45:51.243073  669182 main.go:141] libmachine: (old-k8s-version-803027) waiting for SSH...
	I0317 13:45:51.245500  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:51.245879  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027
	I0317 13:45:51.245907  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find defined IP address of network mk-old-k8s-version-803027 interface with MAC address 52:54:00:c7:07:e9
	I0317 13:45:51.246041  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Using SSH client type: external
	I0317 13:45:51.246070  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa (-rw-------)
	I0317 13:45:51.246105  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:45:51.246144  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | About to run SSH command:
	I0317 13:45:51.246162  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | exit 0
	I0317 13:45:51.249914  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | SSH cmd err, output: exit status 255: 
	I0317 13:45:51.249945  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0317 13:45:51.249976  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | command : exit 0
	I0317 13:45:51.249995  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | err     : exit status 255
	I0317 13:45:51.250027  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | output  : 
	I0317 13:45:54.250106  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Getting to WaitForSSH function...
	I0317 13:45:54.252386  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.252741  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.252766  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.252960  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Using SSH client type: external
	I0317 13:45:54.252982  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa (-rw-------)
	I0317 13:45:54.253004  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:45:54.253015  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | About to run SSH command:
	I0317 13:45:54.253038  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | exit 0
	I0317 13:45:54.375409  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | SSH cmd err, output: <nil>: 
	I0317 13:45:54.375718  669182 main.go:141] libmachine: (old-k8s-version-803027) KVM machine creation complete
	I0317 13:45:54.376028  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetConfigRaw
	I0317 13:45:54.376621  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:54.376839  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:54.377008  669182 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 13:45:54.377020  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetState
	I0317 13:45:54.378269  669182 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 13:45:54.378281  669182 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 13:45:54.378291  669182 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 13:45:54.378301  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.380591  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.380959  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.380981  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.381135  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:54.381316  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.381479  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.381623  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:54.381788  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:54.382014  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:54.382026  669182 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 13:45:54.478534  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:45:54.478569  669182 main.go:141] libmachine: Detecting the provisioner...
	I0317 13:45:54.478586  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.481535  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.481803  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.481828  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.481974  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:54.482139  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.482365  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.482555  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:54.482721  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:54.482920  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:54.482957  669182 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 13:45:54.583789  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 13:45:54.583877  669182 main.go:141] libmachine: found compatible host: buildroot
	I0317 13:45:54.583884  669182 main.go:141] libmachine: Provisioning with buildroot...
	I0317 13:45:54.583898  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetMachineName
	I0317 13:45:54.584188  669182 buildroot.go:166] provisioning hostname "old-k8s-version-803027"
	I0317 13:45:54.584220  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetMachineName
	I0317 13:45:54.584422  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.586680  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.587143  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.587180  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.587367  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:54.587557  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.587735  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.587903  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:54.588106  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:54.588333  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:54.588345  669182 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-803027 && echo "old-k8s-version-803027" | sudo tee /etc/hostname
	I0317 13:45:54.700090  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-803027
	
	I0317 13:45:54.700122  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.702728  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.703104  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.703134  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.703311  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:54.703545  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.703679  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.703837  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:54.703976  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:54.704214  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:54.704237  669182 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-803027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-803027/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-803027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:45:54.811761  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:45:54.811794  669182 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:45:54.811837  669182 buildroot.go:174] setting up certificates
	I0317 13:45:54.811850  669182 provision.go:84] configureAuth start
	I0317 13:45:54.811864  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetMachineName
	I0317 13:45:54.812197  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:45:54.814656  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.815055  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.815087  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.815247  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.817348  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.817647  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.817673  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.817786  669182 provision.go:143] copyHostCerts
	I0317 13:45:54.817856  669182 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:45:54.817874  669182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:45:54.817944  669182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:45:54.818074  669182 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:45:54.818085  669182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:45:54.818109  669182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:45:54.818184  669182 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:45:54.818192  669182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:45:54.818210  669182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:45:54.818266  669182 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-803027 san=[127.0.0.1 192.168.61.229 localhost minikube old-k8s-version-803027]
	I0317 13:45:54.889126  669182 provision.go:177] copyRemoteCerts
	I0317 13:45:54.889186  669182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:45:54.889221  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:54.891953  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.892232  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:54.892264  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:54.892474  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:54.892699  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:54.892887  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:54.893025  669182 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:45:54.973309  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:45:54.996085  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0317 13:45:55.018040  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 13:45:55.040573  669182 provision.go:87] duration metric: took 228.70201ms to configureAuth
	I0317 13:45:55.040619  669182 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:45:55.040824  669182 config.go:182] Loaded profile config "old-k8s-version-803027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:45:55.040917  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:55.043628  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.043972  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.044002  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.044188  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:55.044433  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.044632  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.044791  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:55.044972  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:55.045171  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:55.045187  669182 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:45:55.255485  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:45:55.255516  669182 main.go:141] libmachine: Checking connection to Docker...
	I0317 13:45:55.255526  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetURL
	I0317 13:45:55.256899  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | using libvirt version 6000000
	I0317 13:45:55.258951  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.259226  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.259255  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.259416  669182 main.go:141] libmachine: Docker is up and running!
	I0317 13:45:55.259435  669182 main.go:141] libmachine: Reticulating splines...
	I0317 13:45:55.259443  669182 client.go:171] duration metric: took 25.612919221s to LocalClient.Create
	I0317 13:45:55.259477  669182 start.go:167] duration metric: took 25.612991301s to libmachine.API.Create "old-k8s-version-803027"
	I0317 13:45:55.259495  669182 start.go:293] postStartSetup for "old-k8s-version-803027" (driver="kvm2")
	I0317 13:45:55.259508  669182 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:45:55.259557  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:55.259777  669182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:45:55.259809  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:55.261641  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.261966  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.261990  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.262117  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:55.262271  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.262461  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:55.262622  669182 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:45:55.341414  669182 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:45:55.345240  669182 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:45:55.345266  669182 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:45:55.345329  669182 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:45:55.345398  669182 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:45:55.345551  669182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:45:55.354548  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:45:55.376343  669182 start.go:296] duration metric: took 116.829514ms for postStartSetup
	I0317 13:45:55.376407  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetConfigRaw
	I0317 13:45:55.377042  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:45:55.379611  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.379943  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.379964  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.380341  669182 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/config.json ...
	I0317 13:45:55.380546  669182 start.go:128] duration metric: took 25.755967347s to createHost
	I0317 13:45:55.380570  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:55.382921  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.383231  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.383262  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.383453  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:55.383646  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.383814  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.383933  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:55.384090  669182 main.go:141] libmachine: Using SSH client type: native
	I0317 13:45:55.384328  669182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:45:55.384340  669182 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:45:55.484029  669182 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219155.461524795
	
	I0317 13:45:55.484057  669182 fix.go:216] guest clock: 1742219155.461524795
	I0317 13:45:55.484068  669182 fix.go:229] Guest: 2025-03-17 13:45:55.461524795 +0000 UTC Remote: 2025-03-17 13:45:55.380556744 +0000 UTC m=+52.061335954 (delta=80.968051ms)
	I0317 13:45:55.484095  669182 fix.go:200] guest clock delta is within tolerance: 80.968051ms
	I0317 13:45:55.484100  669182 start.go:83] releasing machines lock for "old-k8s-version-803027", held for 25.859714629s
	I0317 13:45:55.484136  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:55.484453  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:45:55.487346  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.487796  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.487834  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.488025  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:55.488573  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:55.488782  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:45:55.488845  669182 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:45:55.488902  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:55.489040  669182 ssh_runner.go:195] Run: cat /version.json
	I0317 13:45:55.489068  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:45:55.491717  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.492024  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.492096  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.492121  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.492291  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:55.492441  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.492486  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:55.492513  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:55.492636  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:55.492721  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:45:55.492780  669182 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:45:55.492860  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:45:55.492992  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:45:55.493143  669182 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:45:55.569642  669182 ssh_runner.go:195] Run: systemctl --version
	I0317 13:45:55.590175  669182 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:45:55.747427  669182 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:45:55.755146  669182 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:45:55.755212  669182 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:45:55.771027  669182 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:45:55.771052  669182 start.go:495] detecting cgroup driver to use...
	I0317 13:45:55.771121  669182 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:45:55.787897  669182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:45:55.800319  669182 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:45:55.800381  669182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:45:55.813337  669182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:45:55.825564  669182 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:45:55.936983  669182 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:45:56.101386  669182 docker.go:233] disabling docker service ...
	I0317 13:45:56.101467  669182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:45:56.118628  669182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:45:56.132886  669182 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:45:56.269977  669182 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:45:56.395983  669182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:45:56.409048  669182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:45:56.426036  669182 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0317 13:45:56.426119  669182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:45:56.436248  669182 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:45:56.436311  669182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:45:56.445986  669182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:45:56.456227  669182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:45:56.465798  669182 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:45:56.475830  669182 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:45:56.484444  669182 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:45:56.484524  669182 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:45:56.496073  669182 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:45:56.504879  669182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:45:56.625040  669182 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:45:56.713631  669182 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:45:56.713716  669182 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:45:56.718009  669182 start.go:563] Will wait 60s for crictl version
	I0317 13:45:56.718073  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:45:56.721492  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:45:56.754725  669182 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:45:56.754802  669182 ssh_runner.go:195] Run: crio --version
	I0317 13:45:56.780089  669182 ssh_runner.go:195] Run: crio --version
	I0317 13:45:56.809423  669182 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0317 13:45:56.810779  669182 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:45:56.813403  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:56.813706  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:45:44 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:45:56.813742  669182 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:45:56.813952  669182 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0317 13:45:56.818027  669182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:45:56.830111  669182 kubeadm.go:883] updating cluster {Name:old-k8s-version-803027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:45:56.830229  669182 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0317 13:45:56.830283  669182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:45:56.862558  669182 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0317 13:45:56.862640  669182 ssh_runner.go:195] Run: which lz4
	I0317 13:45:56.866543  669182 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:45:56.870714  669182 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:45:56.870754  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0317 13:45:58.202222  669182 crio.go:462] duration metric: took 1.335699881s to copy over tarball
	I0317 13:45:58.202304  669182 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:46:00.612590  669182 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.410253696s)
	I0317 13:46:00.612628  669182 crio.go:469] duration metric: took 2.410371799s to extract the tarball
	I0317 13:46:00.612638  669182 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:46:00.654075  669182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:46:00.698256  669182 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0317 13:46:00.698287  669182 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0317 13:46:00.698342  669182 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:46:00.698357  669182 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:00.698420  669182 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0317 13:46:00.698433  669182 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:00.698448  669182 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0317 13:46:00.698456  669182 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:00.698441  669182 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:00.698418  669182 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:00.699748  669182 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:00.699810  669182 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:00.699829  669182 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:00.699753  669182 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:00.699974  669182 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:00.699982  669182 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0317 13:46:00.699996  669182 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:46:00.699999  669182 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0317 13:46:00.847096  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:00.850522  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:00.853558  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:00.853734  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0317 13:46:00.871165  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:00.888266  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0317 13:46:00.894171  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:00.941720  669182 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0317 13:46:00.941785  669182 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:00.941840  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:00.950196  669182 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0317 13:46:00.950259  669182 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:00.950316  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:00.967887  669182 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0317 13:46:00.967950  669182 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0317 13:46:00.967986  669182 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0317 13:46:00.968006  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:00.968025  669182 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:00.968078  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:00.997764  669182 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0317 13:46:00.997815  669182 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:00.997872  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:01.012339  669182 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0317 13:46:01.012391  669182 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0317 13:46:01.012439  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:01.013578  669182 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0317 13:46:01.013603  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:01.013619  669182 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:01.013651  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:01.013654  669182 ssh_runner.go:195] Run: which crictl
	I0317 13:46:01.013722  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:01.013657  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:46:01.013730  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:01.021526  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:46:01.105636  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:01.130992  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:01.131047  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:01.131083  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:46:01.131047  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:01.131131  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:01.145563  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:46:01.213442  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:46:01.290581  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:01.290690  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:46:01.290713  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:46:01.290782  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:46:01.290796  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:46:01.290852  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:46:01.317298  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0317 13:46:01.413782  669182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:46:01.421371  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0317 13:46:01.421396  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0317 13:46:01.426911  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0317 13:46:01.433554  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0317 13:46:01.433591  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0317 13:46:01.459824  669182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0317 13:46:02.410062  669182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:46:02.557273  669182 cache_images.go:92] duration metric: took 1.858965497s to LoadCachedImages
	W0317 13:46:02.557370  669182 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0317 13:46:02.557387  669182 kubeadm.go:934] updating node { 192.168.61.229 8443 v1.20.0 crio true true} ...
	I0317 13:46:02.557499  669182 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-803027 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:46:02.557577  669182 ssh_runner.go:195] Run: crio config
	I0317 13:46:02.608529  669182 cni.go:84] Creating CNI manager for ""
	I0317 13:46:02.608557  669182 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:46:02.608568  669182 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:46:02.608586  669182 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-803027 NodeName:old-k8s-version-803027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0317 13:46:02.608699  669182 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-803027"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
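	(The block above is the kubeadm, kubelet and kube-proxy configuration minikube renders for the v1.20.0 / CRI-O control plane before writing it to the node. As a minimal sketch only, not something this test run executes, the rendered files could be inspected on the node and run through kubeadm's preflight checks by hand. The paths come from the surrounding log lines; the preflight-only invocation is an assumption about manual verification, not minikube behaviour.)
		# Sketch, not part of the test run: inspect what minikube copied to the node
		sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
		sudo cat /var/tmp/minikube/kubeadm.yaml.new
		# Assumption: run only the preflight phase against the same config to catch obvious problems early
		sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new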
	
	I0317 13:46:02.608760  669182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0317 13:46:02.618979  669182 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:46:02.619041  669182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:46:02.628619  669182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0317 13:46:02.644430  669182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:46:02.663061  669182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0317 13:46:02.682741  669182 ssh_runner.go:195] Run: grep 192.168.61.229	control-plane.minikube.internal$ /etc/hosts
	I0317 13:46:02.688095  669182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:46:02.706123  669182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:46:02.860760  669182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:46:02.877957  669182 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027 for IP: 192.168.61.229
	I0317 13:46:02.877991  669182 certs.go:194] generating shared ca certs ...
	I0317 13:46:02.878015  669182 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:02.878212  669182 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:46:02.878276  669182 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:46:02.878290  669182 certs.go:256] generating profile certs ...
	I0317 13:46:02.878371  669182 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.key
	I0317 13:46:02.878411  669182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.crt with IP's: []
	I0317 13:46:02.943760  669182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.crt ...
	I0317 13:46:02.943802  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.crt: {Name:mk65b93f15885e6dbfc5fe81f4825ede29af84ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:02.944020  669182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.key ...
	I0317 13:46:02.944044  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.key: {Name:mk8054b97714f5519489dbabc3adec69734611eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:02.944179  669182 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key.729f1cc3
	I0317 13:46:02.944208  669182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt.729f1cc3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.229]
	I0317 13:46:03.417250  669182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt.729f1cc3 ...
	I0317 13:46:03.417287  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt.729f1cc3: {Name:mk571d80349a8579bd389bfe3a89f496b4f4b4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:03.456815  669182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key.729f1cc3 ...
	I0317 13:46:03.456863  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key.729f1cc3: {Name:mk92ffa0b12a6ea74b4fe2acb8062b7b3ddfb45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:03.457031  669182 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt.729f1cc3 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt
	I0317 13:46:03.457146  669182 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key.729f1cc3 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key
	I0317 13:46:03.457235  669182 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.key
	I0317 13:46:03.457259  669182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.crt with IP's: []
	I0317 13:46:03.535835  669182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.crt ...
	I0317 13:46:03.535865  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.crt: {Name:mk9430ddd69712bd5f3dd62ef4266a5b3bbca50d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:03.591292  669182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.key ...
	I0317 13:46:03.591343  669182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.key: {Name:mkf70d4c286306fd785f19ecee89372f3d7ee79c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:46:03.591635  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:46:03.591687  669182 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:46:03.591702  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:46:03.591733  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:46:03.591764  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:46:03.591801  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:46:03.591854  669182 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:46:03.592506  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:46:03.621945  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:46:03.646073  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:46:03.671158  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:46:03.695266  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0317 13:46:03.722011  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:46:03.753431  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:46:03.785230  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 13:46:03.820225  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:46:03.854067  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:46:03.878551  669182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:46:03.905438  669182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:46:03.924750  669182 ssh_runner.go:195] Run: openssl version
	I0317 13:46:03.930579  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:46:03.941686  669182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:46:03.946192  669182 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:46:03.946259  669182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:46:03.952065  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:46:03.962713  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:46:03.973492  669182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:46:03.979228  669182 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:46:03.979299  669182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:46:03.984833  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:46:03.995609  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:46:04.006779  669182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:46:04.011688  669182 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:46:04.011757  669182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:46:04.019347  669182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:46:04.030095  669182 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:46:04.035429  669182 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:46:04.035484  669182 kubeadm.go:392] StartCluster: {Name:old-k8s-version-803027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:46:04.035615  669182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:46:04.035675  669182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:46:04.080232  669182 cri.go:89] found id: ""
	I0317 13:46:04.080306  669182 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:46:04.093474  669182 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:46:04.103662  669182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:46:04.113196  669182 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:46:04.113216  669182 kubeadm.go:157] found existing configuration files:
	
	I0317 13:46:04.113265  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:46:04.122656  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:46:04.122730  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:46:04.132492  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:46:04.141395  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:46:04.141476  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:46:04.150576  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:46:04.159202  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:46:04.159285  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:46:04.169461  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:46:04.178572  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:46:04.178638  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:46:04.188061  669182 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:46:04.290394  669182 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0317 13:46:04.290484  669182 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:46:04.434696  669182 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:46:04.434914  669182 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:46:04.435058  669182 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0317 13:46:04.612260  669182 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:46:04.756445  669182 out.go:235]   - Generating certificates and keys ...
	I0317 13:46:04.756610  669182 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:46:04.756697  669182 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:46:04.756790  669182 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:46:04.856688  669182 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:46:05.165268  669182 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:46:05.310401  669182 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:46:05.439918  669182 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:46:05.440268  669182 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-803027] and IPs [192.168.61.229 127.0.0.1 ::1]
	I0317 13:46:05.571769  669182 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:46:05.572164  669182 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-803027] and IPs [192.168.61.229 127.0.0.1 ::1]
	I0317 13:46:05.701956  669182 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:46:05.960547  669182 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:46:06.073883  669182 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:46:06.074331  669182 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:46:06.182666  669182 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:46:06.338391  669182 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:46:06.961087  669182 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:46:07.087463  669182 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:46:07.107754  669182 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:46:07.111210  669182 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:46:07.111311  669182 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:46:07.295582  669182 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:46:07.297308  669182 out.go:235]   - Booting up control plane ...
	I0317 13:46:07.297457  669182 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:46:07.303807  669182 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:46:07.304846  669182 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:46:07.305666  669182 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:46:07.310548  669182 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0317 13:46:47.305622  669182 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0317 13:46:47.305797  669182 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:46:47.306075  669182 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:46:52.306393  669182 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:46:52.306711  669182 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:47:02.305398  669182 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:47:02.305696  669182 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:47:22.304527  669182 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:47:22.304842  669182 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:48:02.305672  669182 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:48:02.305911  669182 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:48:02.305962  669182 kubeadm.go:310] 
	I0317 13:48:02.306050  669182 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0317 13:48:02.306105  669182 kubeadm.go:310] 		timed out waiting for the condition
	I0317 13:48:02.306113  669182 kubeadm.go:310] 
	I0317 13:48:02.306162  669182 kubeadm.go:310] 	This error is likely caused by:
	I0317 13:48:02.306201  669182 kubeadm.go:310] 		- The kubelet is not running
	I0317 13:48:02.306359  669182 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0317 13:48:02.306375  669182 kubeadm.go:310] 
	I0317 13:48:02.306507  669182 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0317 13:48:02.306566  669182 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0317 13:48:02.306612  669182 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0317 13:48:02.306621  669182 kubeadm.go:310] 
	I0317 13:48:02.306761  669182 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0317 13:48:02.306875  669182 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0317 13:48:02.306888  669182 kubeadm.go:310] 
	I0317 13:48:02.306996  669182 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0317 13:48:02.307069  669182 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0317 13:48:02.307160  669182 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0317 13:48:02.307264  669182 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0317 13:48:02.307275  669182 kubeadm.go:310] 
	I0317 13:48:02.307576  669182 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:48:02.307680  669182 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0317 13:48:02.307755  669182 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0317 13:48:02.307930  669182 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-803027] and IPs [192.168.61.229 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-803027] and IPs [192.168.61.229 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-803027] and IPs [192.168.61.229 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-803027] and IPs [192.168.61.229 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0317 13:48:02.307984  669182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0317 13:48:02.747455  669182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:48:02.766089  669182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:48:02.777862  669182 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:48:02.777887  669182 kubeadm.go:157] found existing configuration files:
	
	I0317 13:48:02.777944  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:48:02.786774  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:48:02.786863  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:48:02.795989  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:48:02.805138  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:48:02.805207  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:48:02.814650  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:48:02.823683  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:48:02.823754  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:48:02.833388  669182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:48:02.842233  669182 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:48:02.842294  669182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:48:02.851613  669182 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:48:02.919594  669182 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0317 13:48:02.919719  669182 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:48:03.050970  669182 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:48:03.051227  669182 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:48:03.051371  669182 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0317 13:48:03.221100  669182 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:48:03.223572  669182 out.go:235]   - Generating certificates and keys ...
	I0317 13:48:03.223699  669182 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:48:03.223793  669182 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:48:03.223907  669182 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0317 13:48:03.224001  669182 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0317 13:48:03.224095  669182 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0317 13:48:03.224168  669182 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0317 13:48:03.224274  669182 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0317 13:48:03.224373  669182 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0317 13:48:03.226328  669182 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0317 13:48:03.226439  669182 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0317 13:48:03.226507  669182 kubeadm.go:310] [certs] Using the existing "sa" key
	I0317 13:48:03.226589  669182 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:48:03.416830  669182 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:48:03.484877  669182 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:48:03.794807  669182 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:48:03.926938  669182 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:48:03.945226  669182 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:48:03.946174  669182 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:48:03.946246  669182 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:48:04.067091  669182 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:48:04.068817  669182 out.go:235]   - Booting up control plane ...
	I0317 13:48:04.068917  669182 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:48:04.072636  669182 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:48:04.076793  669182 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:48:04.077813  669182 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:48:04.080646  669182 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0317 13:48:44.082941  669182 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0317 13:48:44.083036  669182 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:48:44.083284  669182 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:48:49.083668  669182 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:48:49.083868  669182 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:48:59.084270  669182 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:48:59.084572  669182 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:49:19.083898  669182 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:49:19.084106  669182 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:49:59.084793  669182 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:49:59.085277  669182 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:49:59.085312  669182 kubeadm.go:310] 
	I0317 13:49:59.085423  669182 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0317 13:49:59.085511  669182 kubeadm.go:310] 		timed out waiting for the condition
	I0317 13:49:59.085521  669182 kubeadm.go:310] 
	I0317 13:49:59.085606  669182 kubeadm.go:310] 	This error is likely caused by:
	I0317 13:49:59.085675  669182 kubeadm.go:310] 		- The kubelet is not running
	I0317 13:49:59.085947  669182 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0317 13:49:59.086046  669182 kubeadm.go:310] 
	I0317 13:49:59.086309  669182 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0317 13:49:59.086383  669182 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0317 13:49:59.086454  669182 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0317 13:49:59.086464  669182 kubeadm.go:310] 
	I0317 13:49:59.086704  669182 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0317 13:49:59.086912  669182 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0317 13:49:59.086930  669182 kubeadm.go:310] 
	I0317 13:49:59.087171  669182 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0317 13:49:59.087360  669182 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0317 13:49:59.087636  669182 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0317 13:49:59.087820  669182 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0317 13:49:59.087856  669182 kubeadm.go:310] 
	I0317 13:49:59.088220  669182 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:49:59.088476  669182 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0317 13:49:59.088828  669182 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
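	(Both kubeadm attempts, at 13:46:04 and 13:48:02, fail identically: the kubelet never starts serving its health endpoint, so every probe of http://localhost:10248/healthz is refused and wait-control-plane times out after 4m0s. Below is a minimal sketch of the follow-up checks the kubeadm output itself recommends; running them interactively on the node is an assumption here, since minikube only gathers the kubelet journal later in this log.)
		# Sketch: confirm whether the kubelet unit is active and why it last exited
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet --no-pager | tail -n 100
		# List any control-plane containers CRI-O actually started
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause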
	I0317 13:49:59.089115  669182 kubeadm.go:394] duration metric: took 3m55.053633268s to StartCluster
	I0317 13:49:59.089160  669182 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:49:59.089216  669182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:49:59.122889  669182 cri.go:89] found id: ""
	I0317 13:49:59.122924  669182 logs.go:282] 0 containers: []
	W0317 13:49:59.122936  669182 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:49:59.122944  669182 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:49:59.123009  669182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:49:59.157873  669182 cri.go:89] found id: ""
	I0317 13:49:59.157907  669182 logs.go:282] 0 containers: []
	W0317 13:49:59.157919  669182 logs.go:284] No container was found matching "etcd"
	I0317 13:49:59.157926  669182 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:49:59.158000  669182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:49:59.194138  669182 cri.go:89] found id: ""
	I0317 13:49:59.194173  669182 logs.go:282] 0 containers: []
	W0317 13:49:59.194185  669182 logs.go:284] No container was found matching "coredns"
	I0317 13:49:59.194193  669182 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:49:59.194294  669182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:49:59.229517  669182 cri.go:89] found id: ""
	I0317 13:49:59.229550  669182 logs.go:282] 0 containers: []
	W0317 13:49:59.229568  669182 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:49:59.229577  669182 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:49:59.229643  669182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:49:59.269038  669182 cri.go:89] found id: ""
	I0317 13:49:59.269071  669182 logs.go:282] 0 containers: []
	W0317 13:49:59.269083  669182 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:49:59.269092  669182 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:49:59.269161  669182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:49:59.317516  669182 cri.go:89] found id: ""
	I0317 13:49:59.317556  669182 logs.go:282] 0 containers: []
	W0317 13:49:59.317575  669182 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:49:59.317584  669182 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:49:59.317658  669182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:49:59.361780  669182 cri.go:89] found id: ""
	I0317 13:49:59.361817  669182 logs.go:282] 0 containers: []
	W0317 13:49:59.361830  669182 logs.go:284] No container was found matching "kindnet"
	I0317 13:49:59.361844  669182 logs.go:123] Gathering logs for kubelet ...
	I0317 13:49:59.361861  669182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:49:59.420511  669182 logs.go:123] Gathering logs for dmesg ...
	I0317 13:49:59.420559  669182 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:49:59.433436  669182 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:49:59.433469  669182 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:49:59.543386  669182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:49:59.543412  669182 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:49:59.543426  669182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:49:59.657057  669182 logs.go:123] Gathering logs for container status ...
	I0317 13:49:59.657094  669182 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0317 13:49:59.704099  669182 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0317 13:49:59.704169  669182 out.go:270] * 
	* 
	W0317 13:49:59.704224  669182 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0317 13:49:59.704284  669182 out.go:270] * 
	* 
	W0317 13:49:59.705175  669182 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 13:49:59.708724  669182 out.go:201] 
	W0317 13:49:59.709885  669182 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0317 13:49:59.709941  669182 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0317 13:49:59.709971  669182 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0317 13:49:59.711350  669182 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-803027 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 6 (237.975503ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 13:49:59.999203  672748 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-803027" does not appear in /home/jenkins/minikube-integration/20539-621978/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-803027" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (296.70s)
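
The first start above exits with K8S_KUBELET_NOT_RUNNING: the kubelet never answered on 127.0.0.1:10248, so kubeadm timed out waiting for the control plane. Minikube's own suggestion in the log is to check 'journalctl -xeu kubelet' and to retry with the systemd cgroup driver. A minimal sketch of that retry follows, reusing the profile name and start flags from the failed run; the --extra-config value is only the suggestion printed above, not a verified fix.

	# Inspect why the kubelet never answered on 127.0.0.1:10248 (assumes the VM is reachable over SSH)
	minikube ssh -p old-k8s-version-803027 -- "sudo journalctl -xeu kubelet | tail -n 100"

	# Retry the start with the cgroup driver the log suggests
	out/minikube-linux-amd64 start -p old-k8s-version-803027 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd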

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-803027 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-803027 create -f testdata/busybox.yaml: exit status 1 (46.821904ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-803027" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-803027 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 6 (225.136713ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 13:50:00.274336  672788 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-803027" does not appear in /home/jenkins/minikube-integration/20539-621978/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-803027" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 6 (224.796267ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 13:50:00.496716  672818 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-803027" does not appear in /home/jenkins/minikube-integration/20539-621978/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-803027" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
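
DeployApp fails immediately because the kubeconfig no longer contains an endpoint for "old-k8s-version-803027", so the kubectl context does not exist; the status output above prints the matching warning. A sketch of the remedy that warning points at, assuming the profile's control plane eventually comes up:

	# Rewrite the kubeconfig entry for the profile, as the status warning suggests
	out/minikube-linux-amd64 -p old-k8s-version-803027 update-context

	# Confirm kubectl resolves the context before re-running the deploy step
	kubectl config current-context
	kubectl --context old-k8s-version-803027 get nodes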

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (98.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-803027 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-803027 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m37.818644699s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-803027 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-803027 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-803027 describe deploy/metrics-server -n kube-system: exit status 1 (55.864477ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-803027" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-803027 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 6 (265.752505ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 13:51:38.638799  673514 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-803027" does not appear in /home/jenkins/minikube-integration/20539-621978/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-803027" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (98.14s)
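
The addon enable fails only because every kubectl apply it issues is refused at localhost:8443: the apiserver from the failed kubeadm init never came up. Before retrying the addon it is worth confirming whether an apiserver container exists at all, roughly as the kubeadm hint earlier in the log suggests. A sketch assuming crictl and curl are available inside the VM; the healthz probe is an assumption added here, not part of the test:

	# List control-plane containers CRI-O knows about (command taken from the kubeadm hint above)
	minikube ssh -p old-k8s-version-803027 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Probe the apiserver port that the addon callbacks depend on
	minikube ssh -p old-k8s-version-803027 -- "curl -sk https://localhost:8443/healthz"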

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (507.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-803027 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0317 13:52:12.573723  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:53:44.452625  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-803027 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m25.439780799s)

                                                
                                                
-- stdout --
	* [old-k8s-version-803027] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-803027" primary control-plane node in "old-k8s-version-803027" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-803027" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:51:43.276646  673643 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:51:43.276905  673643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:51:43.276916  673643 out.go:358] Setting ErrFile to fd 2...
	I0317 13:51:43.276920  673643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:51:43.277119  673643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:51:43.277670  673643 out.go:352] Setting JSON to false
	I0317 13:51:43.278712  673643 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12847,"bootTime":1742206656,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:51:43.278816  673643 start.go:139] virtualization: kvm guest
	I0317 13:51:43.280849  673643 out.go:177] * [old-k8s-version-803027] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:51:43.282452  673643 notify.go:220] Checking for updates...
	I0317 13:51:43.282463  673643 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:51:43.283727  673643 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:51:43.285058  673643 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:51:43.286282  673643 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:51:43.287578  673643 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:51:43.288749  673643 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:51:43.290439  673643 config.go:182] Loaded profile config "old-k8s-version-803027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:51:43.290902  673643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:51:43.290977  673643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:51:43.306757  673643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32953
	I0317 13:51:43.307225  673643 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:51:43.307757  673643 main.go:141] libmachine: Using API Version  1
	I0317 13:51:43.307803  673643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:51:43.308166  673643 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:51:43.308355  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:51:43.309989  673643 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0317 13:51:43.311193  673643 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:51:43.311523  673643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:51:43.311580  673643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:51:43.327208  673643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I0317 13:51:43.327655  673643 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:51:43.328093  673643 main.go:141] libmachine: Using API Version  1
	I0317 13:51:43.328124  673643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:51:43.328492  673643 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:51:43.328697  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:51:43.367381  673643 out.go:177] * Using the kvm2 driver based on existing profile
	I0317 13:51:43.368722  673643 start.go:297] selected driver: kvm2
	I0317 13:51:43.368743  673643 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-803027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:51:43.368888  673643 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:51:43.370030  673643 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:51:43.370144  673643 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 13:51:43.387392  673643 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 13:51:43.387968  673643 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:51:43.388009  673643 cni.go:84] Creating CNI manager for ""
	I0317 13:51:43.388061  673643 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:51:43.388100  673643 start.go:340] cluster config:
	{Name:old-k8s-version-803027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:51:43.388201  673643 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:51:43.389819  673643 out.go:177] * Starting "old-k8s-version-803027" primary control-plane node in "old-k8s-version-803027" cluster
	I0317 13:51:43.391039  673643 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0317 13:51:43.391107  673643 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0317 13:51:43.391123  673643 cache.go:56] Caching tarball of preloaded images
	I0317 13:51:43.391256  673643 preload.go:172] Found /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0317 13:51:43.391271  673643 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0317 13:51:43.391419  673643 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/config.json ...
	I0317 13:51:43.391676  673643 start.go:360] acquireMachinesLock for old-k8s-version-803027: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:51:43.391745  673643 start.go:364] duration metric: took 44.09µs to acquireMachinesLock for "old-k8s-version-803027"
	I0317 13:51:43.391769  673643 start.go:96] Skipping create...Using existing machine configuration
	I0317 13:51:43.391782  673643 fix.go:54] fixHost starting: 
	I0317 13:51:43.392112  673643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:51:43.392184  673643 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:51:43.406852  673643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41817
	I0317 13:51:43.407241  673643 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:51:43.407780  673643 main.go:141] libmachine: Using API Version  1
	I0317 13:51:43.407805  673643 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:51:43.408124  673643 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:51:43.408320  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:51:43.408488  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetState
	I0317 13:51:43.410015  673643 fix.go:112] recreateIfNeeded on old-k8s-version-803027: state=Stopped err=<nil>
	I0317 13:51:43.410042  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	W0317 13:51:43.410192  673643 fix.go:138] unexpected machine state, will restart: <nil>
	I0317 13:51:43.412256  673643 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-803027" ...
	I0317 13:51:43.413583  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .Start
	I0317 13:51:43.413766  673643 main.go:141] libmachine: (old-k8s-version-803027) starting domain...
	I0317 13:51:43.413789  673643 main.go:141] libmachine: (old-k8s-version-803027) ensuring networks are active...
	I0317 13:51:43.414497  673643 main.go:141] libmachine: (old-k8s-version-803027) Ensuring network default is active
	I0317 13:51:43.414862  673643 main.go:141] libmachine: (old-k8s-version-803027) Ensuring network mk-old-k8s-version-803027 is active
	I0317 13:51:43.415199  673643 main.go:141] libmachine: (old-k8s-version-803027) getting domain XML...
	I0317 13:51:43.415915  673643 main.go:141] libmachine: (old-k8s-version-803027) creating domain...
	I0317 13:51:44.670745  673643 main.go:141] libmachine: (old-k8s-version-803027) waiting for IP...
	I0317 13:51:44.671731  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:44.672184  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:44.672261  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:44.672181  673679 retry.go:31] will retry after 307.463815ms: waiting for domain to come up
	I0317 13:51:44.981932  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:44.982554  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:44.982571  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:44.982529  673679 retry.go:31] will retry after 276.670802ms: waiting for domain to come up
	I0317 13:51:45.261053  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:45.261564  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:45.261589  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:45.261535  673679 retry.go:31] will retry after 411.734893ms: waiting for domain to come up
	I0317 13:51:45.675292  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:45.675992  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:45.676019  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:45.675951  673679 retry.go:31] will retry after 426.95366ms: waiting for domain to come up
	I0317 13:51:46.104708  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:46.105198  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:46.105271  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:46.105165  673679 retry.go:31] will retry after 610.271655ms: waiting for domain to come up
	I0317 13:51:46.716701  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:46.717297  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:46.717335  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:46.717283  673679 retry.go:31] will retry after 689.634858ms: waiting for domain to come up
	I0317 13:51:47.408194  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:47.408722  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:47.408798  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:47.408711  673679 retry.go:31] will retry after 808.788667ms: waiting for domain to come up
	I0317 13:51:48.219087  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:48.219681  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:48.219751  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:48.219664  673679 retry.go:31] will retry after 1.114496632s: waiting for domain to come up
	I0317 13:51:49.335906  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:49.336414  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:49.336442  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:49.336375  673679 retry.go:31] will retry after 1.545021126s: waiting for domain to come up
	I0317 13:51:50.882851  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:50.883248  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:50.883270  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:50.883227  673679 retry.go:31] will retry after 1.626823304s: waiting for domain to come up
	I0317 13:51:52.512019  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:52.512649  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:52.512685  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:52.512586  673679 retry.go:31] will retry after 2.421374739s: waiting for domain to come up
	I0317 13:51:54.935081  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:54.935588  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:54.935624  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:54.935575  673679 retry.go:31] will retry after 2.567662154s: waiting for domain to come up
	I0317 13:51:57.506169  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:51:57.506599  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | unable to find current IP address of domain old-k8s-version-803027 in network mk-old-k8s-version-803027
	I0317 13:51:57.506640  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | I0317 13:51:57.506564  673679 retry.go:31] will retry after 2.831843867s: waiting for domain to come up
	I0317 13:52:00.340603  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.341163  673643 main.go:141] libmachine: (old-k8s-version-803027) found domain IP: 192.168.61.229
	I0317 13:52:00.341181  673643 main.go:141] libmachine: (old-k8s-version-803027) reserving static IP address...
	I0317 13:52:00.341205  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has current primary IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.341683  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "old-k8s-version-803027", mac: "52:54:00:c7:07:e9", ip: "192.168.61.229"} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:00.341717  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | skip adding static IP to network mk-old-k8s-version-803027 - found existing host DHCP lease matching {name: "old-k8s-version-803027", mac: "52:54:00:c7:07:e9", ip: "192.168.61.229"}
	I0317 13:52:00.341732  673643 main.go:141] libmachine: (old-k8s-version-803027) reserved static IP address 192.168.61.229 for domain old-k8s-version-803027
	I0317 13:52:00.341762  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | Getting to WaitForSSH function...
	I0317 13:52:00.341791  673643 main.go:141] libmachine: (old-k8s-version-803027) waiting for SSH...
	I0317 13:52:00.344111  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.344435  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:00.344477  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.344623  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | Using SSH client type: external
	I0317 13:52:00.344647  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa (-rw-------)
	I0317 13:52:00.344688  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:52:00.344705  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | About to run SSH command:
	I0317 13:52:00.344721  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | exit 0
	I0317 13:52:00.471603  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | SSH cmd err, output: <nil>: 
	I0317 13:52:00.472047  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetConfigRaw
	I0317 13:52:00.472718  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:52:00.475466  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.475855  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:00.475887  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.476179  673643 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/config.json ...
	I0317 13:52:00.476452  673643 machine.go:93] provisionDockerMachine start ...
	I0317 13:52:00.476479  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:52:00.476721  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:52:00.479095  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.479471  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:00.479488  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.479609  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:52:00.479803  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:00.479972  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:00.480109  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:52:00.480260  673643 main.go:141] libmachine: Using SSH client type: native
	I0317 13:52:00.480590  673643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:52:00.480606  673643 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 13:52:00.583627  673643 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0317 13:52:00.583670  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetMachineName
	I0317 13:52:00.583989  673643 buildroot.go:166] provisioning hostname "old-k8s-version-803027"
	I0317 13:52:00.584020  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetMachineName
	I0317 13:52:00.584219  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:52:00.586980  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.587369  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:00.587397  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.587614  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:52:00.587786  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:00.587964  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:00.588148  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:52:00.588332  673643 main.go:141] libmachine: Using SSH client type: native
	I0317 13:52:00.588546  673643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:52:00.588566  673643 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-803027 && echo "old-k8s-version-803027" | sudo tee /etc/hostname
	I0317 13:52:00.701107  673643 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-803027
	
	I0317 13:52:00.701144  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:52:00.704459  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.704812  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:00.704846  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.705004  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:52:00.705188  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:00.705353  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:00.705507  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:52:00.705680  673643 main.go:141] libmachine: Using SSH client type: native
	I0317 13:52:00.705896  673643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:52:00.705918  673643 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-803027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-803027/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-803027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:52:00.820089  673643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:52:00.820143  673643 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:52:00.820168  673643 buildroot.go:174] setting up certificates
	I0317 13:52:00.820177  673643 provision.go:84] configureAuth start
	I0317 13:52:00.820188  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetMachineName
	I0317 13:52:00.820522  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:52:00.823213  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.823566  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:00.823595  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.823756  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:52:00.826212  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.826628  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:00.826661  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:00.826812  673643 provision.go:143] copyHostCerts
	I0317 13:52:00.826877  673643 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:52:00.826901  673643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:52:00.826982  673643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:52:00.827096  673643 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:52:00.827111  673643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:52:00.827149  673643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:52:00.827225  673643 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:52:00.827237  673643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:52:00.827270  673643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:52:00.827345  673643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-803027 san=[127.0.0.1 192.168.61.229 localhost minikube old-k8s-version-803027]
	I0317 13:52:01.086895  673643 provision.go:177] copyRemoteCerts
	I0317 13:52:01.086952  673643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:52:01.086983  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:52:01.089361  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.089667  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:01.089696  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.089948  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:52:01.090184  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:01.090373  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:52:01.090516  673643 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:52:01.169902  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:52:01.193122  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0317 13:52:01.218789  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 13:52:01.242840  673643 provision.go:87] duration metric: took 422.645348ms to configureAuth
	I0317 13:52:01.242874  673643 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:52:01.243051  673643 config.go:182] Loaded profile config "old-k8s-version-803027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:52:01.243138  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:52:01.245883  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.246245  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:01.246268  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.246388  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:52:01.246618  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:01.246786  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:01.246976  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:52:01.247193  673643 main.go:141] libmachine: Using SSH client type: native
	I0317 13:52:01.247502  673643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:52:01.247549  673643 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:52:01.481481  673643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:52:01.481514  673643 machine.go:96] duration metric: took 1.005044035s to provisionDockerMachine
	I0317 13:52:01.481533  673643 start.go:293] postStartSetup for "old-k8s-version-803027" (driver="kvm2")
	I0317 13:52:01.481547  673643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:52:01.481580  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:52:01.481937  673643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:52:01.481969  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:52:01.484891  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.485229  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:01.485261  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.485425  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:52:01.485621  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:01.485745  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:52:01.485876  673643 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:52:01.571781  673643 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:52:01.575481  673643 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:52:01.575509  673643 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:52:01.575589  673643 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:52:01.575660  673643 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:52:01.575748  673643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:52:01.584684  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:52:01.610365  673643 start.go:296] duration metric: took 128.815778ms for postStartSetup
	I0317 13:52:01.610408  673643 fix.go:56] duration metric: took 18.218630675s for fixHost
	I0317 13:52:01.610445  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:52:01.613094  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.613401  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:01.613424  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.613694  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:52:01.613922  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:01.614094  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:01.614248  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:52:01.614415  673643 main.go:141] libmachine: Using SSH client type: native
	I0317 13:52:01.614624  673643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.229 22 <nil> <nil>}
	I0317 13:52:01.614635  673643 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:52:01.711730  673643 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219521.685159844
	
	I0317 13:52:01.711756  673643 fix.go:216] guest clock: 1742219521.685159844
	I0317 13:52:01.711765  673643 fix.go:229] Guest: 2025-03-17 13:52:01.685159844 +0000 UTC Remote: 2025-03-17 13:52:01.610411828 +0000 UTC m=+18.372525409 (delta=74.748016ms)
	I0317 13:52:01.711792  673643 fix.go:200] guest clock delta is within tolerance: 74.748016ms
	I0317 13:52:01.711798  673643 start.go:83] releasing machines lock for "old-k8s-version-803027", held for 18.320039428s
	I0317 13:52:01.711824  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:52:01.712121  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:52:01.714793  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.715192  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:01.715221  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.715376  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:52:01.716042  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:52:01.716255  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .DriverName
	I0317 13:52:01.716343  673643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:52:01.716395  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:52:01.716449  673643 ssh_runner.go:195] Run: cat /version.json
	I0317 13:52:01.716478  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHHostname
	I0317 13:52:01.719434  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.719843  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:01.719873  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.719895  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.720107  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:52:01.720274  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:01.720379  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:01.720415  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:01.720483  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:52:01.720638  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHPort
	I0317 13:52:01.720645  673643 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:52:01.720820  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHKeyPath
	I0317 13:52:01.720940  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetSSHUsername
	I0317 13:52:01.721076  673643 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/old-k8s-version-803027/id_rsa Username:docker}
	I0317 13:52:01.791975  673643 ssh_runner.go:195] Run: systemctl --version
	I0317 13:52:01.815027  673643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:52:01.957178  673643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:52:01.962413  673643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:52:01.962487  673643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:52:01.977533  673643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:52:01.977561  673643 start.go:495] detecting cgroup driver to use...
	I0317 13:52:01.977636  673643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:52:01.992736  673643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:52:02.006991  673643 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:52:02.007069  673643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:52:02.020246  673643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:52:02.032812  673643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:52:02.148720  673643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:52:02.308795  673643 docker.go:233] disabling docker service ...
	I0317 13:52:02.308873  673643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:52:02.324430  673643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:52:02.337962  673643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:52:02.468804  673643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:52:02.592806  673643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:52:02.608396  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:52:02.625967  673643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0317 13:52:02.626041  673643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:52:02.637251  673643 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:52:02.637324  673643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:52:02.647126  673643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:52:02.657249  673643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:52:02.667159  673643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:52:02.677575  673643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:52:02.686766  673643 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:52:02.686834  673643 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:52:02.699450  673643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:52:02.710002  673643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:52:02.843902  673643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:52:02.941295  673643 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:52:02.941371  673643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:52:02.946166  673643 start.go:563] Will wait 60s for crictl version
	I0317 13:52:02.946217  673643 ssh_runner.go:195] Run: which crictl
	I0317 13:52:02.949674  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:52:02.991330  673643 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:52:02.991412  673643 ssh_runner.go:195] Run: crio --version
	I0317 13:52:03.018088  673643 ssh_runner.go:195] Run: crio --version
	I0317 13:52:03.049352  673643 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0317 13:52:03.050648  673643 main.go:141] libmachine: (old-k8s-version-803027) Calling .GetIP
	I0317 13:52:03.053696  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:03.054059  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:07:e9", ip: ""} in network mk-old-k8s-version-803027: {Iface:virbr3 ExpiryTime:2025-03-17 14:51:54 +0000 UTC Type:0 Mac:52:54:00:c7:07:e9 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:old-k8s-version-803027 Clientid:01:52:54:00:c7:07:e9}
	I0317 13:52:03.054092  673643 main.go:141] libmachine: (old-k8s-version-803027) DBG | domain old-k8s-version-803027 has defined IP address 192.168.61.229 and MAC address 52:54:00:c7:07:e9 in network mk-old-k8s-version-803027
	I0317 13:52:03.054404  673643 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0317 13:52:03.058379  673643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:52:03.070452  673643 kubeadm.go:883] updating cluster {Name:old-k8s-version-803027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:52:03.070559  673643 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0317 13:52:03.070616  673643 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:52:03.116087  673643 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0317 13:52:03.116172  673643 ssh_runner.go:195] Run: which lz4
	I0317 13:52:03.120523  673643 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:52:03.124525  673643 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:52:03.124564  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0317 13:52:04.573611  673643 crio.go:462] duration metric: took 1.453111711s to copy over tarball
	I0317 13:52:04.573690  673643 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:52:07.418661  673643 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.84493715s)
	I0317 13:52:07.418691  673643 crio.go:469] duration metric: took 2.845049085s to extract the tarball
	I0317 13:52:07.418699  673643 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:52:07.460309  673643 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:52:07.496657  673643 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0317 13:52:07.496682  673643 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0317 13:52:07.496771  673643 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:52:07.496790  673643 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:52:07.496806  673643 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:52:07.496838  673643 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:52:07.496843  673643 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:52:07.496895  673643 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0317 13:52:07.496917  673643 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0317 13:52:07.496936  673643 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:52:07.498154  673643 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:52:07.498181  673643 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0317 13:52:07.498181  673643 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0317 13:52:07.498181  673643 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:52:07.498156  673643 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:52:07.498181  673643 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:52:07.498181  673643 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:52:07.498181  673643 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:52:07.630000  673643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0317 13:52:07.639092  673643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:52:07.640810  673643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:52:07.644937  673643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:52:07.649590  673643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0317 13:52:07.676807  673643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0317 13:52:07.686278  673643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:52:07.713666  673643 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0317 13:52:07.713726  673643 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0317 13:52:07.713774  673643 ssh_runner.go:195] Run: which crictl
	I0317 13:52:07.760310  673643 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0317 13:52:07.760368  673643 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:52:07.760425  673643 ssh_runner.go:195] Run: which crictl
	I0317 13:52:07.771635  673643 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0317 13:52:07.771681  673643 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:52:07.771728  673643 ssh_runner.go:195] Run: which crictl
	I0317 13:52:07.791509  673643 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0317 13:52:07.791587  673643 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:52:07.791595  673643 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0317 13:52:07.791638  673643 ssh_runner.go:195] Run: which crictl
	I0317 13:52:07.791652  673643 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0317 13:52:07.791698  673643 ssh_runner.go:195] Run: which crictl
	I0317 13:52:07.807222  673643 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0317 13:52:07.807269  673643 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0317 13:52:07.807322  673643 ssh_runner.go:195] Run: which crictl
	I0317 13:52:07.813797  673643 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0317 13:52:07.813825  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:52:07.813846  673643 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:52:07.813879  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:52:07.813922  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:52:07.813882  673643 ssh_runner.go:195] Run: which crictl
	I0317 13:52:07.813973  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:52:07.813983  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:52:07.813982  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:52:07.933121  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:52:07.933156  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:52:07.933180  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:52:07.933258  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:52:07.933262  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:52:07.933315  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:52:07.933346  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:52:08.056982  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0317 13:52:08.066892  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0317 13:52:08.066900  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0317 13:52:08.092497  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:52:08.092595  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0317 13:52:08.092617  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0317 13:52:08.092669  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0317 13:52:08.157911  673643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0317 13:52:08.182576  673643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0317 13:52:08.185031  673643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0317 13:52:08.210070  673643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0317 13:52:08.210135  673643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0317 13:52:08.210140  673643 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0317 13:52:08.227639  673643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0317 13:52:08.248013  673643 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0317 13:52:09.310793  673643 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:52:09.451886  673643 cache_images.go:92] duration metric: took 1.95516958s to LoadCachedImages
	W0317 13:52:09.451996  673643 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20539-621978/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
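	The podman image inspect / crictl rmi sequence above is the cache check that produced the "needs transfer" lines: the tag's ID in the container runtime is compared with the expected hash and, on a mismatch or a missing image, the stale tag is removed so the cached tarball can be loaded. Below is a minimal local sketch of that check, assuming podman and crictl are on PATH and reusing the coredns tag and hash from the log; it is an illustration, not minikube's actual cache_images.go code.
	
	// imagecheck_sketch.go
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// imageNeedsTransfer reports whether the tagged image is absent from the
	// runtime's storage or stored under an unexpected ID.
	func imageNeedsTransfer(tag, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", tag).Output()
		if err != nil {
			// podman exits non-zero when the image is missing entirely.
			return true
		}
		return strings.TrimSpace(string(out)) != wantID
	}
	
	func main() {
		tag := "registry.k8s.io/coredns:1.7.0"                                    // tag from the log above
		want := "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" // expected hash from the log above
		if imageNeedsTransfer(tag, want) {
			fmt.Printf("%q needs transfer; removing stale tag\n", tag)
			_ = exec.Command("sudo", "crictl", "rmi", tag).Run() // ignore error if the tag is not present
		}
	}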
	I0317 13:52:09.452014  673643 kubeadm.go:934] updating node { 192.168.61.229 8443 v1.20.0 crio true true} ...
	I0317 13:52:09.452126  673643 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-803027 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:52:09.452212  673643 ssh_runner.go:195] Run: crio config
	I0317 13:52:09.505156  673643 cni.go:84] Creating CNI manager for ""
	I0317 13:52:09.505185  673643 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 13:52:09.505200  673643 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:52:09.505221  673643 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.229 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-803027 NodeName:old-k8s-version-803027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0317 13:52:09.505408  673643 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-803027"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:52:09.505504  673643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0317 13:52:09.515393  673643 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:52:09.515465  673643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:52:09.524897  673643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0317 13:52:09.542498  673643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:52:09.559349  673643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
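	The scp memory --> kubeadm.yaml.new step above stages the freshly rendered config next to the live one; later in the log a "diff -u kubeadm.yaml kubeadm.yaml.new" followed by "cp" installs it only when the contents changed. The sketch below shows that write-compare-install pattern locally, with the SSH/scp plumbing omitted and the paths taken from the log; it is a simplification, not minikube's implementation.
	
	// configsync_sketch.go
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
	)
	
	// syncConfig writes the rendered config to the .new path and installs it
	// over the current path only when the contents actually differ.
	func syncConfig(current, next string, rendered []byte) (changed bool, err error) {
		if err := os.WriteFile(next, rendered, 0o644); err != nil {
			return false, err
		}
		old, err := os.ReadFile(current)
		if err == nil && bytes.Equal(old, rendered) {
			return false, nil // nothing to do: cluster config is unchanged
		}
		// File missing or contents differ: install the new one.
		return true, os.Rename(next, current)
	}
	
	func main() {
		changed, err := syncConfig(
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new",
			[]byte("apiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\n"), // placeholder payload
		)
		fmt.Println("changed:", changed, "err:", err)
	}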
	I0317 13:52:09.577799  673643 ssh_runner.go:195] Run: grep 192.168.61.229	control-plane.minikube.internal$ /etc/hosts
	I0317 13:52:09.581779  673643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:52:09.593654  673643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:52:09.719098  673643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:52:09.738895  673643 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027 for IP: 192.168.61.229
	I0317 13:52:09.738923  673643 certs.go:194] generating shared ca certs ...
	I0317 13:52:09.738946  673643 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:52:09.739125  673643 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:52:09.739167  673643 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:52:09.739182  673643 certs.go:256] generating profile certs ...
	I0317 13:52:09.829566  673643 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/client.key
	I0317 13:52:09.829696  673643 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key.729f1cc3
	I0317 13:52:09.829751  673643 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.key
	I0317 13:52:09.829887  673643 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:52:09.829930  673643 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:52:09.829947  673643 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:52:09.829978  673643 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:52:09.830006  673643 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:52:09.830039  673643 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:52:09.830092  673643 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:52:09.831002  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:52:09.868500  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:52:09.910561  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:52:09.944789  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:52:09.968461  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0317 13:52:09.992379  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:52:10.015838  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:52:10.038681  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/old-k8s-version-803027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 13:52:10.062117  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:52:10.085249  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:52:10.108265  673643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:52:10.131442  673643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:52:10.149113  673643 ssh_runner.go:195] Run: openssl version
	I0317 13:52:10.154508  673643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:52:10.165014  673643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:52:10.169263  673643 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:52:10.169320  673643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:52:10.174595  673643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:52:10.184525  673643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:52:10.194147  673643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:52:10.198088  673643 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:52:10.198142  673643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:52:10.203357  673643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:52:10.213809  673643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:52:10.224453  673643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:52:10.228526  673643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:52:10.228588  673643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:52:10.233987  673643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
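	The "openssl x509 -hash -noout" plus "ln -fs ... /etc/ssl/certs/<hash>.0" pairs above add each certificate to the system trust store under its OpenSSL subject-hash name. A small sketch of the same idea follows, assuming openssl on PATH, root privileges, and the minikubeCA path from the log; the hash value itself (b5213941 above) is whatever openssl reports.
	
	// trustcert_sketch.go
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	// trustCert links a PEM certificate into /etc/ssl/certs under its
	// OpenSSL subject hash so system TLS clients pick it up.
	func trustCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // replace any stale link, like the ln -fs in the log
		return os.Symlink(pem, link)
	}
	
	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("trust setup failed:", err)
		}
	}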
	I0317 13:52:10.244524  673643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:52:10.248713  673643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0317 13:52:10.254269  673643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0317 13:52:10.259956  673643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0317 13:52:10.265486  673643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0317 13:52:10.271262  673643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0317 13:52:10.276689  673643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0317 13:52:10.282043  673643 kubeadm.go:392] StartCluster: {Name:old-k8s-version-803027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-803027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:52:10.282121  673643 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:52:10.282155  673643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:52:10.320221  673643 cri.go:89] found id: ""
	I0317 13:52:10.320315  673643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:52:10.330380  673643 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0317 13:52:10.330402  673643 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0317 13:52:10.330451  673643 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0317 13:52:10.339660  673643 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:52:10.340516  673643 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-803027" does not appear in /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:52:10.341064  673643 kubeconfig.go:62] /home/jenkins/minikube-integration/20539-621978/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-803027" cluster setting kubeconfig missing "old-k8s-version-803027" context setting]
	I0317 13:52:10.342049  673643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:52:10.343873  673643 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0317 13:52:10.356664  673643 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.229
	I0317 13:52:10.356695  673643 kubeadm.go:1160] stopping kube-system containers ...
	I0317 13:52:10.356707  673643 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0317 13:52:10.356753  673643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:52:10.391765  673643 cri.go:89] found id: ""
	I0317 13:52:10.391831  673643 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0317 13:52:10.410338  673643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:52:10.420316  673643 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:52:10.420337  673643 kubeadm.go:157] found existing configuration files:
	
	I0317 13:52:10.420378  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:52:10.430629  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:52:10.430688  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:52:10.439683  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:52:10.449199  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:52:10.449257  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:52:10.458161  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:52:10.466585  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:52:10.466634  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:52:10.475386  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:52:10.484087  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:52:10.484135  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
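	The grep-then-"rm -f" sequence above is the stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is deleted so "kubeadm init phase kubeconfig" can regenerate it. A minimal sketch of that rule is below; the endpoint string and file list come from the log, and local execution as root is assumed.
	
	// kubeconfig_cleanup_sketch.go
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range confs {
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				fmt.Printf("%s missing or stale, removing\n", path)
				_ = os.Remove(path) // mirrors the sudo rm -f calls in the log
			}
		}
	}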
	I0317 13:52:10.493798  673643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:52:10.503683  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:52:10.621682  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:52:11.872189  673643 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.250464645s)
	I0317 13:52:11.872255  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:52:12.087127  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:52:12.172391  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:52:12.257459  673643 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:52:12.257565  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:12.758055  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:13.258570  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:13.757682  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:14.257902  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:14.758322  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:15.257659  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:15.758683  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:16.257800  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:16.757800  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:17.257887  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:17.757800  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:18.258699  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:18.757920  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:19.258284  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:19.757726  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:20.258327  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:20.758358  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:21.257749  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:21.758668  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:22.257759  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:22.758414  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:23.258413  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:23.757810  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:24.257854  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:24.758440  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:25.258580  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:25.757708  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:26.257813  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:26.757715  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:27.257868  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:27.758306  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:28.257677  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:28.758300  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:29.258346  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:29.758265  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:30.257692  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:30.757646  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:31.258547  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:31.757928  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:32.257715  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:32.757808  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:33.257819  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:33.757951  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:34.258104  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:34.757873  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:35.258332  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:35.758270  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:36.258331  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:36.758066  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:37.257830  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:37.757606  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:38.257873  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:38.757890  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:39.257889  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:39.758572  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:40.257861  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:40.758347  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:41.257955  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:41.758401  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:42.258622  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:42.757986  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:43.258429  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:43.757815  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:44.258378  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:44.758586  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:45.257734  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:45.758271  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:46.258258  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:46.757699  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:47.258237  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:47.758621  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:48.257752  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:48.757935  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:49.258464  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:49.757713  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:50.257936  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:50.758235  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:51.258610  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:51.758275  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:52.258073  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:52.758467  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:53.258019  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:53.757692  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:54.258382  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:54.758014  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:55.258109  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:55.758488  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:56.258078  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:56.758614  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:57.257847  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:57.758216  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:58.257838  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:58.757663  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:59.258455  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:52:59.758296  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:00.258500  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:00.758398  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:01.257666  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:01.757831  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:02.257777  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:02.758616  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:03.258696  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:03.758280  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:04.258197  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:04.757987  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:05.258397  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:05.758084  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:06.258453  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:06.757833  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:07.258089  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:07.757851  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:08.258072  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:08.758673  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:09.258269  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:09.758128  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:10.258058  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:10.758606  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:11.258021  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:11.757719  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
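	The repeated "pgrep -xnf kube-apiserver.*minikube.*" lines above are a fixed-interval wait: the same exact-match pgrep is retried roughly every 500ms until the apiserver process appears or the wait gives up and falls back to log collection. A minimal sketch of that polling loop follows; the timeout value and local (non-SSH) execution are assumptions for illustration.
	
	// apiserverwait_sketch.go
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForAPIServer retries the pgrep check until it succeeds or the
	// deadline passes.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Same check as in the log: exact full-command-line match.
			if err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // process found
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}
	
	func main() {
		if err := waitForAPIServer(1 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}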
	I0317 13:53:12.257920  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:12.258022  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:12.292659  673643 cri.go:89] found id: ""
	I0317 13:53:12.292695  673643 logs.go:282] 0 containers: []
	W0317 13:53:12.292728  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:12.292737  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:12.292800  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:12.325426  673643 cri.go:89] found id: ""
	I0317 13:53:12.325458  673643 logs.go:282] 0 containers: []
	W0317 13:53:12.325468  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:12.325474  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:12.325528  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:12.356971  673643 cri.go:89] found id: ""
	I0317 13:53:12.357000  673643 logs.go:282] 0 containers: []
	W0317 13:53:12.357013  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:12.357019  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:12.357070  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:12.387665  673643 cri.go:89] found id: ""
	I0317 13:53:12.387693  673643 logs.go:282] 0 containers: []
	W0317 13:53:12.387701  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:12.387707  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:12.387763  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:12.418046  673643 cri.go:89] found id: ""
	I0317 13:53:12.418073  673643 logs.go:282] 0 containers: []
	W0317 13:53:12.418082  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:12.418090  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:12.418150  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:12.452911  673643 cri.go:89] found id: ""
	I0317 13:53:12.452937  673643 logs.go:282] 0 containers: []
	W0317 13:53:12.452946  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:12.452952  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:12.453002  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:12.489311  673643 cri.go:89] found id: ""
	I0317 13:53:12.489352  673643 logs.go:282] 0 containers: []
	W0317 13:53:12.489365  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:12.489372  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:12.489449  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:12.521334  673643 cri.go:89] found id: ""
	I0317 13:53:12.521365  673643 logs.go:282] 0 containers: []
	W0317 13:53:12.521377  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:12.521389  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:12.521403  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:12.596819  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:12.596875  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:12.637140  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:12.637171  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:12.695245  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:12.695302  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:12.710016  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:12.710045  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:12.831830  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
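	The block above shows the diagnostic fallback once the apiserver wait keeps failing: for every expected component a "crictl ps -a --quiet --name=<component>" is issued, each empty result is reported as "No container was found matching", and kubelet/CRI-O/dmesg/describe-nodes output is gathered instead. A small sketch of that discovery loop, assuming crictl is installed locally (minikube runs the same commands over SSH):
	
	// containerdiscovery_sketch.go
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, _ := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}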
	I0317 13:53:15.332759  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:15.345225  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:15.345299  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:15.379803  673643 cri.go:89] found id: ""
	I0317 13:53:15.379833  673643 logs.go:282] 0 containers: []
	W0317 13:53:15.379843  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:15.379851  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:15.379911  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:15.412675  673643 cri.go:89] found id: ""
	I0317 13:53:15.412707  673643 logs.go:282] 0 containers: []
	W0317 13:53:15.412717  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:15.412724  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:15.412794  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:15.448182  673643 cri.go:89] found id: ""
	I0317 13:53:15.448218  673643 logs.go:282] 0 containers: []
	W0317 13:53:15.448230  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:15.448238  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:15.448314  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:15.479685  673643 cri.go:89] found id: ""
	I0317 13:53:15.479718  673643 logs.go:282] 0 containers: []
	W0317 13:53:15.479726  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:15.479732  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:15.479785  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:15.511479  673643 cri.go:89] found id: ""
	I0317 13:53:15.511512  673643 logs.go:282] 0 containers: []
	W0317 13:53:15.511544  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:15.511552  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:15.511611  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:15.543753  673643 cri.go:89] found id: ""
	I0317 13:53:15.543786  673643 logs.go:282] 0 containers: []
	W0317 13:53:15.543798  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:15.543807  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:15.543873  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:15.574626  673643 cri.go:89] found id: ""
	I0317 13:53:15.574653  673643 logs.go:282] 0 containers: []
	W0317 13:53:15.574662  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:15.574667  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:15.574717  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:15.610397  673643 cri.go:89] found id: ""
	I0317 13:53:15.610424  673643 logs.go:282] 0 containers: []
	W0317 13:53:15.610433  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:15.610442  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:15.610467  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:15.693307  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:15.693349  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:15.742714  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:15.742754  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:15.810073  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:15.810122  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:15.828134  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:15.828172  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:15.900695  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:18.401400  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:18.416558  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:18.416636  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:18.451076  673643 cri.go:89] found id: ""
	I0317 13:53:18.451106  673643 logs.go:282] 0 containers: []
	W0317 13:53:18.451118  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:18.451126  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:18.451215  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:18.483121  673643 cri.go:89] found id: ""
	I0317 13:53:18.483158  673643 logs.go:282] 0 containers: []
	W0317 13:53:18.483170  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:18.483177  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:18.483269  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:18.514613  673643 cri.go:89] found id: ""
	I0317 13:53:18.514650  673643 logs.go:282] 0 containers: []
	W0317 13:53:18.514662  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:18.514670  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:18.514732  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:18.544562  673643 cri.go:89] found id: ""
	I0317 13:53:18.544590  673643 logs.go:282] 0 containers: []
	W0317 13:53:18.544598  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:18.544604  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:18.544655  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:18.576532  673643 cri.go:89] found id: ""
	I0317 13:53:18.576559  673643 logs.go:282] 0 containers: []
	W0317 13:53:18.576567  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:18.576573  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:18.576622  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:18.607817  673643 cri.go:89] found id: ""
	I0317 13:53:18.607853  673643 logs.go:282] 0 containers: []
	W0317 13:53:18.607861  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:18.607868  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:18.607919  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:18.640373  673643 cri.go:89] found id: ""
	I0317 13:53:18.640403  673643 logs.go:282] 0 containers: []
	W0317 13:53:18.640414  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:18.640422  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:18.640492  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:18.670060  673643 cri.go:89] found id: ""
	I0317 13:53:18.670091  673643 logs.go:282] 0 containers: []
	W0317 13:53:18.670104  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:18.670117  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:18.670131  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:18.682276  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:18.682309  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:18.746916  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:18.746941  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:18.746956  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:18.827747  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:18.827785  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:18.864469  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:18.864499  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:21.418439  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:21.431876  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:21.431948  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:21.467308  673643 cri.go:89] found id: ""
	I0317 13:53:21.467344  673643 logs.go:282] 0 containers: []
	W0317 13:53:21.467354  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:21.467361  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:21.467424  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:21.503600  673643 cri.go:89] found id: ""
	I0317 13:53:21.503633  673643 logs.go:282] 0 containers: []
	W0317 13:53:21.503645  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:21.503652  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:21.503719  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:21.548256  673643 cri.go:89] found id: ""
	I0317 13:53:21.548291  673643 logs.go:282] 0 containers: []
	W0317 13:53:21.548301  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:21.548307  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:21.548372  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:21.581662  673643 cri.go:89] found id: ""
	I0317 13:53:21.581692  673643 logs.go:282] 0 containers: []
	W0317 13:53:21.581702  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:21.581710  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:21.581767  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:21.635314  673643 cri.go:89] found id: ""
	I0317 13:53:21.635341  673643 logs.go:282] 0 containers: []
	W0317 13:53:21.635350  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:21.635357  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:21.635425  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:21.668375  673643 cri.go:89] found id: ""
	I0317 13:53:21.668401  673643 logs.go:282] 0 containers: []
	W0317 13:53:21.668409  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:21.668416  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:21.668468  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:21.698356  673643 cri.go:89] found id: ""
	I0317 13:53:21.698391  673643 logs.go:282] 0 containers: []
	W0317 13:53:21.698402  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:21.698410  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:21.698486  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:21.735785  673643 cri.go:89] found id: ""
	I0317 13:53:21.735817  673643 logs.go:282] 0 containers: []
	W0317 13:53:21.735828  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:21.735839  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:21.735855  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:21.749266  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:21.749296  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:21.818683  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:21.818712  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:21.818727  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:21.899244  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:21.899287  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:21.936968  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:21.937003  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:24.485696  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:24.498115  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:24.498208  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:24.530302  673643 cri.go:89] found id: ""
	I0317 13:53:24.530334  673643 logs.go:282] 0 containers: []
	W0317 13:53:24.530347  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:24.530355  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:24.530414  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:24.560832  673643 cri.go:89] found id: ""
	I0317 13:53:24.560865  673643 logs.go:282] 0 containers: []
	W0317 13:53:24.560877  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:24.560893  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:24.560956  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:24.597340  673643 cri.go:89] found id: ""
	I0317 13:53:24.597379  673643 logs.go:282] 0 containers: []
	W0317 13:53:24.597390  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:24.597397  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:24.597450  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:24.630822  673643 cri.go:89] found id: ""
	I0317 13:53:24.630858  673643 logs.go:282] 0 containers: []
	W0317 13:53:24.630871  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:24.630880  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:24.630949  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:24.662889  673643 cri.go:89] found id: ""
	I0317 13:53:24.662919  673643 logs.go:282] 0 containers: []
	W0317 13:53:24.662928  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:24.662935  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:24.662987  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:24.696918  673643 cri.go:89] found id: ""
	I0317 13:53:24.696952  673643 logs.go:282] 0 containers: []
	W0317 13:53:24.696964  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:24.696972  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:24.697038  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:24.730851  673643 cri.go:89] found id: ""
	I0317 13:53:24.730881  673643 logs.go:282] 0 containers: []
	W0317 13:53:24.730889  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:24.730895  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:24.730949  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:24.764952  673643 cri.go:89] found id: ""
	I0317 13:53:24.764987  673643 logs.go:282] 0 containers: []
	W0317 13:53:24.765000  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:24.765011  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:24.765026  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:24.815172  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:24.815222  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:24.830132  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:24.830205  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:24.893899  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:24.893920  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:24.893932  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:24.972474  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:24.972516  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:27.511971  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:27.524199  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:27.524265  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:27.560990  673643 cri.go:89] found id: ""
	I0317 13:53:27.561026  673643 logs.go:282] 0 containers: []
	W0317 13:53:27.561034  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:27.561041  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:27.561095  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:27.590405  673643 cri.go:89] found id: ""
	I0317 13:53:27.590434  673643 logs.go:282] 0 containers: []
	W0317 13:53:27.590442  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:27.590448  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:27.590503  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:27.620425  673643 cri.go:89] found id: ""
	I0317 13:53:27.620462  673643 logs.go:282] 0 containers: []
	W0317 13:53:27.620475  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:27.620484  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:27.620563  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:27.651240  673643 cri.go:89] found id: ""
	I0317 13:53:27.651280  673643 logs.go:282] 0 containers: []
	W0317 13:53:27.651289  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:27.651296  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:27.651349  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:27.686897  673643 cri.go:89] found id: ""
	I0317 13:53:27.686926  673643 logs.go:282] 0 containers: []
	W0317 13:53:27.686935  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:27.686943  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:27.687006  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:27.722571  673643 cri.go:89] found id: ""
	I0317 13:53:27.722601  673643 logs.go:282] 0 containers: []
	W0317 13:53:27.722614  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:27.722621  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:27.722687  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:27.758742  673643 cri.go:89] found id: ""
	I0317 13:53:27.758777  673643 logs.go:282] 0 containers: []
	W0317 13:53:27.758789  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:27.758797  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:27.758921  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:27.793028  673643 cri.go:89] found id: ""
	I0317 13:53:27.793058  673643 logs.go:282] 0 containers: []
	W0317 13:53:27.793066  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:27.793076  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:27.793087  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:27.874332  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:27.874376  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:27.912389  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:27.912422  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:27.964130  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:27.964172  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:27.977689  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:27.977727  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:28.046393  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:30.547326  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:30.559417  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:30.559489  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:30.591756  673643 cri.go:89] found id: ""
	I0317 13:53:30.591786  673643 logs.go:282] 0 containers: []
	W0317 13:53:30.591799  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:30.591806  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:30.591881  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:30.626561  673643 cri.go:89] found id: ""
	I0317 13:53:30.626597  673643 logs.go:282] 0 containers: []
	W0317 13:53:30.626607  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:30.626613  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:30.626665  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:30.660486  673643 cri.go:89] found id: ""
	I0317 13:53:30.660517  673643 logs.go:282] 0 containers: []
	W0317 13:53:30.660528  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:30.660536  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:30.660606  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:30.697419  673643 cri.go:89] found id: ""
	I0317 13:53:30.697444  673643 logs.go:282] 0 containers: []
	W0317 13:53:30.697453  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:30.697460  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:30.697510  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:30.732063  673643 cri.go:89] found id: ""
	I0317 13:53:30.732107  673643 logs.go:282] 0 containers: []
	W0317 13:53:30.732118  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:30.732126  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:30.732198  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:30.765947  673643 cri.go:89] found id: ""
	I0317 13:53:30.765983  673643 logs.go:282] 0 containers: []
	W0317 13:53:30.765996  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:30.766005  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:30.766065  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:30.798688  673643 cri.go:89] found id: ""
	I0317 13:53:30.798719  673643 logs.go:282] 0 containers: []
	W0317 13:53:30.798730  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:30.798739  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:30.798813  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:30.831642  673643 cri.go:89] found id: ""
	I0317 13:53:30.831680  673643 logs.go:282] 0 containers: []
	W0317 13:53:30.831697  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:30.831710  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:30.831728  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:30.893552  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:30.893574  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:30.893590  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:30.976870  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:30.976916  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:31.013725  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:31.013765  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:31.064544  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:31.064584  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:33.579506  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:33.592236  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:33.592297  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:33.625436  673643 cri.go:89] found id: ""
	I0317 13:53:33.625462  673643 logs.go:282] 0 containers: []
	W0317 13:53:33.625470  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:33.625477  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:33.625535  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:33.656690  673643 cri.go:89] found id: ""
	I0317 13:53:33.656717  673643 logs.go:282] 0 containers: []
	W0317 13:53:33.656726  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:33.656731  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:33.656783  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:33.689809  673643 cri.go:89] found id: ""
	I0317 13:53:33.689840  673643 logs.go:282] 0 containers: []
	W0317 13:53:33.689848  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:33.689856  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:33.689908  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:33.721405  673643 cri.go:89] found id: ""
	I0317 13:53:33.721439  673643 logs.go:282] 0 containers: []
	W0317 13:53:33.721452  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:33.721460  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:33.721531  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:33.753091  673643 cri.go:89] found id: ""
	I0317 13:53:33.753122  673643 logs.go:282] 0 containers: []
	W0317 13:53:33.753140  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:33.753147  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:33.753205  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:33.783456  673643 cri.go:89] found id: ""
	I0317 13:53:33.783491  673643 logs.go:282] 0 containers: []
	W0317 13:53:33.783508  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:33.783515  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:33.783597  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:33.815569  673643 cri.go:89] found id: ""
	I0317 13:53:33.815607  673643 logs.go:282] 0 containers: []
	W0317 13:53:33.815619  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:33.815628  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:33.815732  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:33.847141  673643 cri.go:89] found id: ""
	I0317 13:53:33.847172  673643 logs.go:282] 0 containers: []
	W0317 13:53:33.847182  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:33.847191  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:33.847205  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:33.883619  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:33.883647  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:33.936754  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:33.936790  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:33.951410  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:33.951436  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:34.015940  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:34.015964  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:34.015980  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:36.593068  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:36.606098  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:36.606165  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:36.640052  673643 cri.go:89] found id: ""
	I0317 13:53:36.640080  673643 logs.go:282] 0 containers: []
	W0317 13:53:36.640089  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:36.640095  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:36.640145  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:36.671454  673643 cri.go:89] found id: ""
	I0317 13:53:36.671484  673643 logs.go:282] 0 containers: []
	W0317 13:53:36.671494  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:36.671500  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:36.671579  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:36.703437  673643 cri.go:89] found id: ""
	I0317 13:53:36.703463  673643 logs.go:282] 0 containers: []
	W0317 13:53:36.703472  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:36.703479  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:36.703553  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:36.740565  673643 cri.go:89] found id: ""
	I0317 13:53:36.740596  673643 logs.go:282] 0 containers: []
	W0317 13:53:36.740608  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:36.740616  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:36.740683  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:36.776448  673643 cri.go:89] found id: ""
	I0317 13:53:36.776475  673643 logs.go:282] 0 containers: []
	W0317 13:53:36.776484  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:36.776490  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:36.776541  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:36.810211  673643 cri.go:89] found id: ""
	I0317 13:53:36.810255  673643 logs.go:282] 0 containers: []
	W0317 13:53:36.810264  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:36.810270  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:36.810323  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:36.842677  673643 cri.go:89] found id: ""
	I0317 13:53:36.842728  673643 logs.go:282] 0 containers: []
	W0317 13:53:36.842739  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:36.842747  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:36.842814  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:36.876052  673643 cri.go:89] found id: ""
	I0317 13:53:36.876080  673643 logs.go:282] 0 containers: []
	W0317 13:53:36.876090  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:36.876101  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:36.876116  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:36.889326  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:36.889355  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:36.950941  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:36.950975  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:36.950993  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:37.028874  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:37.028940  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:37.070898  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:37.070931  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:39.622129  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:39.635936  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:39.636045  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:39.669591  673643 cri.go:89] found id: ""
	I0317 13:53:39.669625  673643 logs.go:282] 0 containers: []
	W0317 13:53:39.669634  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:39.669642  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:39.669712  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:39.703558  673643 cri.go:89] found id: ""
	I0317 13:53:39.703594  673643 logs.go:282] 0 containers: []
	W0317 13:53:39.703602  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:39.703608  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:39.703661  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:39.739739  673643 cri.go:89] found id: ""
	I0317 13:53:39.739770  673643 logs.go:282] 0 containers: []
	W0317 13:53:39.739783  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:39.739790  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:39.739860  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:39.774731  673643 cri.go:89] found id: ""
	I0317 13:53:39.774765  673643 logs.go:282] 0 containers: []
	W0317 13:53:39.774778  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:39.774786  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:39.774840  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:39.807328  673643 cri.go:89] found id: ""
	I0317 13:53:39.807357  673643 logs.go:282] 0 containers: []
	W0317 13:53:39.807367  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:39.807372  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:39.807444  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:39.840108  673643 cri.go:89] found id: ""
	I0317 13:53:39.840140  673643 logs.go:282] 0 containers: []
	W0317 13:53:39.840151  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:39.840159  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:39.840222  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:39.875015  673643 cri.go:89] found id: ""
	I0317 13:53:39.875049  673643 logs.go:282] 0 containers: []
	W0317 13:53:39.875058  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:39.875064  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:39.875130  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:39.908893  673643 cri.go:89] found id: ""
	I0317 13:53:39.908930  673643 logs.go:282] 0 containers: []
	W0317 13:53:39.908942  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:39.908956  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:39.908974  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:39.974924  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:39.974966  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:39.974982  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:40.056028  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:40.056076  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:40.098290  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:40.098323  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:40.149793  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:40.149836  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:42.665768  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:42.678747  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:42.678824  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:42.713381  673643 cri.go:89] found id: ""
	I0317 13:53:42.713424  673643 logs.go:282] 0 containers: []
	W0317 13:53:42.713437  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:42.713445  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:42.713507  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:42.746867  673643 cri.go:89] found id: ""
	I0317 13:53:42.746896  673643 logs.go:282] 0 containers: []
	W0317 13:53:42.746907  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:42.746914  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:42.746979  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:42.778307  673643 cri.go:89] found id: ""
	I0317 13:53:42.778348  673643 logs.go:282] 0 containers: []
	W0317 13:53:42.778357  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:42.778409  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:42.778478  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:42.813302  673643 cri.go:89] found id: ""
	I0317 13:53:42.813341  673643 logs.go:282] 0 containers: []
	W0317 13:53:42.813354  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:42.813362  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:42.813426  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:42.845961  673643 cri.go:89] found id: ""
	I0317 13:53:42.845993  673643 logs.go:282] 0 containers: []
	W0317 13:53:42.846002  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:42.846009  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:42.846062  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:42.878607  673643 cri.go:89] found id: ""
	I0317 13:53:42.878632  673643 logs.go:282] 0 containers: []
	W0317 13:53:42.878640  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:42.878645  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:42.878698  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:42.912220  673643 cri.go:89] found id: ""
	I0317 13:53:42.912250  673643 logs.go:282] 0 containers: []
	W0317 13:53:42.912258  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:42.912265  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:42.912319  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:42.944487  673643 cri.go:89] found id: ""
	I0317 13:53:42.944518  673643 logs.go:282] 0 containers: []
	W0317 13:53:42.944530  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:42.944542  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:42.944558  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:43.016300  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:43.016334  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:43.016351  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:43.090426  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:43.090477  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:43.127844  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:43.127872  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:43.178864  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:43.178900  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:45.693786  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:45.708205  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:45.708276  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:45.746383  673643 cri.go:89] found id: ""
	I0317 13:53:45.746412  673643 logs.go:282] 0 containers: []
	W0317 13:53:45.746423  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:45.746430  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:45.746493  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:45.782050  673643 cri.go:89] found id: ""
	I0317 13:53:45.782083  673643 logs.go:282] 0 containers: []
	W0317 13:53:45.782096  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:45.782104  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:45.782166  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:45.816312  673643 cri.go:89] found id: ""
	I0317 13:53:45.816343  673643 logs.go:282] 0 containers: []
	W0317 13:53:45.816354  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:45.816364  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:45.816435  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:45.847696  673643 cri.go:89] found id: ""
	I0317 13:53:45.847729  673643 logs.go:282] 0 containers: []
	W0317 13:53:45.847737  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:45.847743  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:45.847804  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:45.880773  673643 cri.go:89] found id: ""
	I0317 13:53:45.880797  673643 logs.go:282] 0 containers: []
	W0317 13:53:45.880805  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:45.880810  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:45.880860  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:45.912584  673643 cri.go:89] found id: ""
	I0317 13:53:45.912616  673643 logs.go:282] 0 containers: []
	W0317 13:53:45.912628  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:45.912637  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:45.912700  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:45.944739  673643 cri.go:89] found id: ""
	I0317 13:53:45.944772  673643 logs.go:282] 0 containers: []
	W0317 13:53:45.944784  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:45.944791  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:45.944848  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:45.976183  673643 cri.go:89] found id: ""
	I0317 13:53:45.976210  673643 logs.go:282] 0 containers: []
	W0317 13:53:45.976218  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:45.976227  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:45.976237  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:46.025468  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:46.025501  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:46.038864  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:46.038893  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:46.103046  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:46.103071  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:46.103086  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:46.180296  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:46.180339  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:48.722573  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:48.735718  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:48.735783  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:48.768172  673643 cri.go:89] found id: ""
	I0317 13:53:48.768206  673643 logs.go:282] 0 containers: []
	W0317 13:53:48.768215  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:48.768221  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:48.768286  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:48.800486  673643 cri.go:89] found id: ""
	I0317 13:53:48.800520  673643 logs.go:282] 0 containers: []
	W0317 13:53:48.800532  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:48.800539  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:48.800606  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:48.831799  673643 cri.go:89] found id: ""
	I0317 13:53:48.831836  673643 logs.go:282] 0 containers: []
	W0317 13:53:48.831848  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:48.831855  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:48.831936  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:48.862328  673643 cri.go:89] found id: ""
	I0317 13:53:48.862360  673643 logs.go:282] 0 containers: []
	W0317 13:53:48.862372  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:48.862380  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:48.862445  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:48.895572  673643 cri.go:89] found id: ""
	I0317 13:53:48.895596  673643 logs.go:282] 0 containers: []
	W0317 13:53:48.895604  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:48.895611  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:48.895670  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:48.928618  673643 cri.go:89] found id: ""
	I0317 13:53:48.928648  673643 logs.go:282] 0 containers: []
	W0317 13:53:48.928705  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:48.928721  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:48.928781  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:48.961877  673643 cri.go:89] found id: ""
	I0317 13:53:48.961914  673643 logs.go:282] 0 containers: []
	W0317 13:53:48.961923  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:48.961929  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:48.961987  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:48.993672  673643 cri.go:89] found id: ""
	I0317 13:53:48.993703  673643 logs.go:282] 0 containers: []
	W0317 13:53:48.993714  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:48.993727  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:48.993742  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:49.043882  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:49.043921  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:49.056628  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:49.056658  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:49.122804  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:49.122828  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:49.122841  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:49.205024  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:49.205070  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:51.744230  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:51.756983  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:51.757067  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:51.793904  673643 cri.go:89] found id: ""
	I0317 13:53:51.793931  673643 logs.go:282] 0 containers: []
	W0317 13:53:51.793939  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:51.793945  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:51.793995  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:51.831062  673643 cri.go:89] found id: ""
	I0317 13:53:51.831090  673643 logs.go:282] 0 containers: []
	W0317 13:53:51.831098  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:51.831104  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:51.831166  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:51.863507  673643 cri.go:89] found id: ""
	I0317 13:53:51.863552  673643 logs.go:282] 0 containers: []
	W0317 13:53:51.863566  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:51.863574  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:51.863642  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:51.901819  673643 cri.go:89] found id: ""
	I0317 13:53:51.901852  673643 logs.go:282] 0 containers: []
	W0317 13:53:51.901867  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:51.901875  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:51.901938  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:51.936421  673643 cri.go:89] found id: ""
	I0317 13:53:51.936450  673643 logs.go:282] 0 containers: []
	W0317 13:53:51.936463  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:51.936470  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:51.936530  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:51.966554  673643 cri.go:89] found id: ""
	I0317 13:53:51.966585  673643 logs.go:282] 0 containers: []
	W0317 13:53:51.966596  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:51.966605  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:51.966665  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:51.998059  673643 cri.go:89] found id: ""
	I0317 13:53:51.998090  673643 logs.go:282] 0 containers: []
	W0317 13:53:51.998102  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:51.998110  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:51.998171  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:52.030158  673643 cri.go:89] found id: ""
	I0317 13:53:52.030192  673643 logs.go:282] 0 containers: []
	W0317 13:53:52.030204  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:52.030225  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:52.030240  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:52.104281  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:52.104327  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:52.140921  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:52.140953  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:52.195379  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:52.195423  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:52.209685  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:52.209713  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:52.281557  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:54.782618  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:54.796305  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:54.796379  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:54.833227  673643 cri.go:89] found id: ""
	I0317 13:53:54.833261  673643 logs.go:282] 0 containers: []
	W0317 13:53:54.833274  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:54.833283  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:54.833345  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:54.865502  673643 cri.go:89] found id: ""
	I0317 13:53:54.865542  673643 logs.go:282] 0 containers: []
	W0317 13:53:54.865557  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:54.865565  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:54.865625  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:54.897674  673643 cri.go:89] found id: ""
	I0317 13:53:54.897705  673643 logs.go:282] 0 containers: []
	W0317 13:53:54.897716  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:54.897724  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:54.897791  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:54.931251  673643 cri.go:89] found id: ""
	I0317 13:53:54.931283  673643 logs.go:282] 0 containers: []
	W0317 13:53:54.931293  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:54.931301  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:54.931358  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:54.962019  673643 cri.go:89] found id: ""
	I0317 13:53:54.962045  673643 logs.go:282] 0 containers: []
	W0317 13:53:54.962053  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:54.962058  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:54.962114  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:55.002410  673643 cri.go:89] found id: ""
	I0317 13:53:55.002442  673643 logs.go:282] 0 containers: []
	W0317 13:53:55.002453  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:55.002461  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:55.002530  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:55.034372  673643 cri.go:89] found id: ""
	I0317 13:53:55.034399  673643 logs.go:282] 0 containers: []
	W0317 13:53:55.034407  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:55.034412  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:55.034463  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:55.065111  673643 cri.go:89] found id: ""
	I0317 13:53:55.065145  673643 logs.go:282] 0 containers: []
	W0317 13:53:55.065154  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:55.065164  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:55.065175  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:53:55.101026  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:55.101061  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:55.151613  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:55.151658  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:55.164662  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:55.164690  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:55.233542  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:55.233573  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:55.233591  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:57.813029  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:53:57.826643  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:53:57.826704  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:53:57.860418  673643 cri.go:89] found id: ""
	I0317 13:53:57.860447  673643 logs.go:282] 0 containers: []
	W0317 13:53:57.860456  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:53:57.860462  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:53:57.860516  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:53:57.891662  673643 cri.go:89] found id: ""
	I0317 13:53:57.891689  673643 logs.go:282] 0 containers: []
	W0317 13:53:57.891698  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:53:57.891705  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:53:57.891775  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:53:57.923234  673643 cri.go:89] found id: ""
	I0317 13:53:57.923265  673643 logs.go:282] 0 containers: []
	W0317 13:53:57.923277  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:53:57.923284  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:53:57.923360  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:53:57.957651  673643 cri.go:89] found id: ""
	I0317 13:53:57.957685  673643 logs.go:282] 0 containers: []
	W0317 13:53:57.957697  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:53:57.957706  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:53:57.957773  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:53:57.992638  673643 cri.go:89] found id: ""
	I0317 13:53:57.992668  673643 logs.go:282] 0 containers: []
	W0317 13:53:57.992677  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:53:57.992683  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:53:57.992744  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:53:58.023900  673643 cri.go:89] found id: ""
	I0317 13:53:58.023925  673643 logs.go:282] 0 containers: []
	W0317 13:53:58.023934  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:53:58.023941  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:53:58.024009  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:53:58.055236  673643 cri.go:89] found id: ""
	I0317 13:53:58.055260  673643 logs.go:282] 0 containers: []
	W0317 13:53:58.055271  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:53:58.055277  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:53:58.055328  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:53:58.088490  673643 cri.go:89] found id: ""
	I0317 13:53:58.088519  673643 logs.go:282] 0 containers: []
	W0317 13:53:58.088528  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:53:58.088544  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:53:58.088556  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:53:58.138269  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:53:58.138312  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:53:58.151743  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:53:58.151773  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:53:58.223674  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:53:58.223696  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:53:58.223710  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:53:58.305240  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:53:58.305280  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:00.849196  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:00.862553  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:00.862645  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:00.897401  673643 cri.go:89] found id: ""
	I0317 13:54:00.897437  673643 logs.go:282] 0 containers: []
	W0317 13:54:00.897450  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:00.897458  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:00.897526  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:00.934853  673643 cri.go:89] found id: ""
	I0317 13:54:00.934884  673643 logs.go:282] 0 containers: []
	W0317 13:54:00.934895  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:00.934903  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:00.934970  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:00.974081  673643 cri.go:89] found id: ""
	I0317 13:54:00.974118  673643 logs.go:282] 0 containers: []
	W0317 13:54:00.974130  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:00.974138  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:00.974211  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:01.007834  673643 cri.go:89] found id: ""
	I0317 13:54:01.007862  673643 logs.go:282] 0 containers: []
	W0317 13:54:01.007871  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:01.007877  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:01.007929  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:01.041274  673643 cri.go:89] found id: ""
	I0317 13:54:01.041310  673643 logs.go:282] 0 containers: []
	W0317 13:54:01.041321  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:01.041330  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:01.041397  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:01.088246  673643 cri.go:89] found id: ""
	I0317 13:54:01.088284  673643 logs.go:282] 0 containers: []
	W0317 13:54:01.088298  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:01.088306  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:01.088373  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:01.124035  673643 cri.go:89] found id: ""
	I0317 13:54:01.124072  673643 logs.go:282] 0 containers: []
	W0317 13:54:01.124084  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:01.124094  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:01.124157  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:01.159975  673643 cri.go:89] found id: ""
	I0317 13:54:01.160002  673643 logs.go:282] 0 containers: []
	W0317 13:54:01.160014  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:01.160027  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:01.160043  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:01.224545  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:01.224573  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:01.224585  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:01.304549  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:01.304601  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:01.340417  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:01.340454  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:01.391464  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:01.391511  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:03.904428  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:03.917471  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:03.917550  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:03.953617  673643 cri.go:89] found id: ""
	I0317 13:54:03.953645  673643 logs.go:282] 0 containers: []
	W0317 13:54:03.953654  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:03.953660  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:03.953713  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:03.988281  673643 cri.go:89] found id: ""
	I0317 13:54:03.988314  673643 logs.go:282] 0 containers: []
	W0317 13:54:03.988325  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:03.988331  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:03.988400  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:04.021494  673643 cri.go:89] found id: ""
	I0317 13:54:04.021529  673643 logs.go:282] 0 containers: []
	W0317 13:54:04.021540  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:04.021546  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:04.021615  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:04.056951  673643 cri.go:89] found id: ""
	I0317 13:54:04.056982  673643 logs.go:282] 0 containers: []
	W0317 13:54:04.056991  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:04.056996  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:04.057049  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:04.091810  673643 cri.go:89] found id: ""
	I0317 13:54:04.091837  673643 logs.go:282] 0 containers: []
	W0317 13:54:04.091845  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:04.091851  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:04.091899  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:04.121627  673643 cri.go:89] found id: ""
	I0317 13:54:04.121659  673643 logs.go:282] 0 containers: []
	W0317 13:54:04.121667  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:04.121674  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:04.121729  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:04.161163  673643 cri.go:89] found id: ""
	I0317 13:54:04.161192  673643 logs.go:282] 0 containers: []
	W0317 13:54:04.161204  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:04.161211  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:04.161278  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:04.196243  673643 cri.go:89] found id: ""
	I0317 13:54:04.196278  673643 logs.go:282] 0 containers: []
	W0317 13:54:04.196290  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:04.196302  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:04.196315  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:04.250126  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:04.250164  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:04.262964  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:04.262993  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:04.331737  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:04.331763  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:04.331775  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:04.410499  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:04.410535  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:06.948307  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:06.961123  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:06.961183  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:06.993366  673643 cri.go:89] found id: ""
	I0317 13:54:06.993400  673643 logs.go:282] 0 containers: []
	W0317 13:54:06.993411  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:06.993419  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:06.993477  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:07.024167  673643 cri.go:89] found id: ""
	I0317 13:54:07.024201  673643 logs.go:282] 0 containers: []
	W0317 13:54:07.024214  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:07.024223  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:07.024299  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:07.054998  673643 cri.go:89] found id: ""
	I0317 13:54:07.055029  673643 logs.go:282] 0 containers: []
	W0317 13:54:07.055038  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:07.055045  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:07.055099  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:07.092715  673643 cri.go:89] found id: ""
	I0317 13:54:07.092741  673643 logs.go:282] 0 containers: []
	W0317 13:54:07.092749  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:07.092755  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:07.092807  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:07.125495  673643 cri.go:89] found id: ""
	I0317 13:54:07.125523  673643 logs.go:282] 0 containers: []
	W0317 13:54:07.125532  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:07.125538  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:07.125609  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:07.159277  673643 cri.go:89] found id: ""
	I0317 13:54:07.159304  673643 logs.go:282] 0 containers: []
	W0317 13:54:07.159313  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:07.159320  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:07.159384  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:07.197511  673643 cri.go:89] found id: ""
	I0317 13:54:07.197552  673643 logs.go:282] 0 containers: []
	W0317 13:54:07.197572  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:07.197580  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:07.197645  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:07.229151  673643 cri.go:89] found id: ""
	I0317 13:54:07.229176  673643 logs.go:282] 0 containers: []
	W0317 13:54:07.229184  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:07.229195  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:07.229206  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:07.280555  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:07.280605  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:07.295715  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:07.295750  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:07.364625  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:07.364649  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:07.364667  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:07.442009  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:07.442045  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:09.985051  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:09.997960  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:09.998020  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:10.030060  673643 cri.go:89] found id: ""
	I0317 13:54:10.030088  673643 logs.go:282] 0 containers: []
	W0317 13:54:10.030096  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:10.030101  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:10.030160  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:10.063830  673643 cri.go:89] found id: ""
	I0317 13:54:10.063865  673643 logs.go:282] 0 containers: []
	W0317 13:54:10.063886  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:10.063894  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:10.063960  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:10.095070  673643 cri.go:89] found id: ""
	I0317 13:54:10.095104  673643 logs.go:282] 0 containers: []
	W0317 13:54:10.095115  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:10.095122  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:10.095189  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:10.127138  673643 cri.go:89] found id: ""
	I0317 13:54:10.127173  673643 logs.go:282] 0 containers: []
	W0317 13:54:10.127184  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:10.127190  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:10.127247  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:10.165489  673643 cri.go:89] found id: ""
	I0317 13:54:10.165526  673643 logs.go:282] 0 containers: []
	W0317 13:54:10.165535  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:10.165541  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:10.165650  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:10.200332  673643 cri.go:89] found id: ""
	I0317 13:54:10.200367  673643 logs.go:282] 0 containers: []
	W0317 13:54:10.200379  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:10.200387  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:10.200453  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:10.233245  673643 cri.go:89] found id: ""
	I0317 13:54:10.233276  673643 logs.go:282] 0 containers: []
	W0317 13:54:10.233287  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:10.233294  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:10.233361  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:10.264483  673643 cri.go:89] found id: ""
	I0317 13:54:10.264516  673643 logs.go:282] 0 containers: []
	W0317 13:54:10.264527  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:10.264536  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:10.264552  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:10.313514  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:10.313549  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:10.325401  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:10.325435  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:10.393305  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:10.393343  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:10.393360  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:10.470001  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:10.470042  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:13.007222  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:13.021121  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:13.021221  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:13.059621  673643 cri.go:89] found id: ""
	I0317 13:54:13.059659  673643 logs.go:282] 0 containers: []
	W0317 13:54:13.059671  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:13.059680  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:13.059746  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:13.095398  673643 cri.go:89] found id: ""
	I0317 13:54:13.095434  673643 logs.go:282] 0 containers: []
	W0317 13:54:13.095445  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:13.095453  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:13.095521  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:13.136670  673643 cri.go:89] found id: ""
	I0317 13:54:13.136699  673643 logs.go:282] 0 containers: []
	W0317 13:54:13.136709  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:13.136722  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:13.136784  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:13.169755  673643 cri.go:89] found id: ""
	I0317 13:54:13.169781  673643 logs.go:282] 0 containers: []
	W0317 13:54:13.169791  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:13.169799  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:13.169857  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:13.213441  673643 cri.go:89] found id: ""
	I0317 13:54:13.213466  673643 logs.go:282] 0 containers: []
	W0317 13:54:13.213476  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:13.213483  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:13.213546  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:13.258855  673643 cri.go:89] found id: ""
	I0317 13:54:13.258889  673643 logs.go:282] 0 containers: []
	W0317 13:54:13.258901  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:13.258908  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:13.258976  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:13.302925  673643 cri.go:89] found id: ""
	I0317 13:54:13.302958  673643 logs.go:282] 0 containers: []
	W0317 13:54:13.302976  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:13.302982  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:13.303052  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:13.348752  673643 cri.go:89] found id: ""
	I0317 13:54:13.348787  673643 logs.go:282] 0 containers: []
	W0317 13:54:13.348799  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:13.348811  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:13.348827  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:13.361566  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:13.361659  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:13.437370  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:13.437391  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:13.437402  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:13.519355  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:13.519393  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:13.555970  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:13.555999  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:16.106301  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:16.119465  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:16.119556  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:16.149802  673643 cri.go:89] found id: ""
	I0317 13:54:16.149830  673643 logs.go:282] 0 containers: []
	W0317 13:54:16.149839  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:16.149845  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:16.149897  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:16.181218  673643 cri.go:89] found id: ""
	I0317 13:54:16.181248  673643 logs.go:282] 0 containers: []
	W0317 13:54:16.181258  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:16.181275  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:16.181343  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:16.213461  673643 cri.go:89] found id: ""
	I0317 13:54:16.213488  673643 logs.go:282] 0 containers: []
	W0317 13:54:16.213498  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:16.213510  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:16.213573  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:16.245182  673643 cri.go:89] found id: ""
	I0317 13:54:16.245210  673643 logs.go:282] 0 containers: []
	W0317 13:54:16.245222  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:16.245233  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:16.245316  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:16.276330  673643 cri.go:89] found id: ""
	I0317 13:54:16.276367  673643 logs.go:282] 0 containers: []
	W0317 13:54:16.276379  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:16.276387  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:16.276469  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:16.310878  673643 cri.go:89] found id: ""
	I0317 13:54:16.310913  673643 logs.go:282] 0 containers: []
	W0317 13:54:16.310925  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:16.310933  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:16.311004  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:16.348459  673643 cri.go:89] found id: ""
	I0317 13:54:16.348492  673643 logs.go:282] 0 containers: []
	W0317 13:54:16.348506  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:16.348515  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:16.348594  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:16.381094  673643 cri.go:89] found id: ""
	I0317 13:54:16.381128  673643 logs.go:282] 0 containers: []
	W0317 13:54:16.381141  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:16.381155  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:16.381171  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:16.415310  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:16.415339  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:16.469560  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:16.469600  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:16.482858  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:16.482890  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:16.547845  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:16.547870  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:16.547882  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:19.132734  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:19.145166  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:19.145226  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:19.176458  673643 cri.go:89] found id: ""
	I0317 13:54:19.176493  673643 logs.go:282] 0 containers: []
	W0317 13:54:19.176505  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:19.176513  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:19.176585  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:19.211257  673643 cri.go:89] found id: ""
	I0317 13:54:19.211298  673643 logs.go:282] 0 containers: []
	W0317 13:54:19.211308  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:19.211317  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:19.211389  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:19.247881  673643 cri.go:89] found id: ""
	I0317 13:54:19.247913  673643 logs.go:282] 0 containers: []
	W0317 13:54:19.247922  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:19.247929  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:19.247990  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:19.283462  673643 cri.go:89] found id: ""
	I0317 13:54:19.283490  673643 logs.go:282] 0 containers: []
	W0317 13:54:19.283514  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:19.283522  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:19.283608  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:19.321089  673643 cri.go:89] found id: ""
	I0317 13:54:19.321129  673643 logs.go:282] 0 containers: []
	W0317 13:54:19.321141  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:19.321149  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:19.321221  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:19.354099  673643 cri.go:89] found id: ""
	I0317 13:54:19.354127  673643 logs.go:282] 0 containers: []
	W0317 13:54:19.354136  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:19.354144  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:19.354196  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:19.386846  673643 cri.go:89] found id: ""
	I0317 13:54:19.386885  673643 logs.go:282] 0 containers: []
	W0317 13:54:19.386898  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:19.386907  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:19.386965  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:19.428103  673643 cri.go:89] found id: ""
	I0317 13:54:19.428136  673643 logs.go:282] 0 containers: []
	W0317 13:54:19.428148  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:19.428161  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:19.428175  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:19.478028  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:19.478069  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:19.491728  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:19.491757  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:19.563983  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:19.564012  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:19.564028  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:19.641961  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:19.641998  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:22.183664  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:22.196209  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:22.196293  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:22.229040  673643 cri.go:89] found id: ""
	I0317 13:54:22.229076  673643 logs.go:282] 0 containers: []
	W0317 13:54:22.229088  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:22.229096  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:22.229160  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:22.262438  673643 cri.go:89] found id: ""
	I0317 13:54:22.262470  673643 logs.go:282] 0 containers: []
	W0317 13:54:22.262481  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:22.262489  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:22.262565  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:22.297091  673643 cri.go:89] found id: ""
	I0317 13:54:22.297125  673643 logs.go:282] 0 containers: []
	W0317 13:54:22.297136  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:22.297142  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:22.297205  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:22.335142  673643 cri.go:89] found id: ""
	I0317 13:54:22.335177  673643 logs.go:282] 0 containers: []
	W0317 13:54:22.335190  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:22.335198  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:22.335281  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:22.373189  673643 cri.go:89] found id: ""
	I0317 13:54:22.373225  673643 logs.go:282] 0 containers: []
	W0317 13:54:22.373237  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:22.373246  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:22.373328  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:22.405912  673643 cri.go:89] found id: ""
	I0317 13:54:22.405942  673643 logs.go:282] 0 containers: []
	W0317 13:54:22.405954  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:22.405962  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:22.406021  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:22.441120  673643 cri.go:89] found id: ""
	I0317 13:54:22.441154  673643 logs.go:282] 0 containers: []
	W0317 13:54:22.441166  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:22.441174  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:22.441256  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:22.475779  673643 cri.go:89] found id: ""
	I0317 13:54:22.475816  673643 logs.go:282] 0 containers: []
	W0317 13:54:22.475830  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:22.475842  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:22.475856  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:22.531909  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:22.531952  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:22.545150  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:22.545188  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:22.613957  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:22.613987  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:22.614005  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:22.696246  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:22.696295  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:25.235702  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:25.248433  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:25.248494  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:25.281113  673643 cri.go:89] found id: ""
	I0317 13:54:25.281143  673643 logs.go:282] 0 containers: []
	W0317 13:54:25.281151  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:25.281157  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:25.281214  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:25.319120  673643 cri.go:89] found id: ""
	I0317 13:54:25.319149  673643 logs.go:282] 0 containers: []
	W0317 13:54:25.319157  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:25.319166  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:25.319226  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:25.350399  673643 cri.go:89] found id: ""
	I0317 13:54:25.350434  673643 logs.go:282] 0 containers: []
	W0317 13:54:25.350446  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:25.350455  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:25.350518  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:25.385323  673643 cri.go:89] found id: ""
	I0317 13:54:25.385358  673643 logs.go:282] 0 containers: []
	W0317 13:54:25.385370  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:25.385378  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:25.385442  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:25.416989  673643 cri.go:89] found id: ""
	I0317 13:54:25.417019  673643 logs.go:282] 0 containers: []
	W0317 13:54:25.417031  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:25.417041  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:25.417105  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:25.450935  673643 cri.go:89] found id: ""
	I0317 13:54:25.450969  673643 logs.go:282] 0 containers: []
	W0317 13:54:25.450981  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:25.450989  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:25.451055  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:25.483800  673643 cri.go:89] found id: ""
	I0317 13:54:25.483834  673643 logs.go:282] 0 containers: []
	W0317 13:54:25.483845  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:25.483852  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:25.483923  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:25.524541  673643 cri.go:89] found id: ""
	I0317 13:54:25.524572  673643 logs.go:282] 0 containers: []
	W0317 13:54:25.524583  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:25.524595  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:25.524611  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:25.560590  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:25.560619  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:25.610101  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:25.610141  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:25.623686  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:25.623720  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:25.694465  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:25.694491  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:25.694503  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:28.271696  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:28.285476  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:28.285549  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:28.320334  673643 cri.go:89] found id: ""
	I0317 13:54:28.320369  673643 logs.go:282] 0 containers: []
	W0317 13:54:28.320382  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:28.320390  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:28.320454  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:28.353615  673643 cri.go:89] found id: ""
	I0317 13:54:28.353652  673643 logs.go:282] 0 containers: []
	W0317 13:54:28.353666  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:28.353730  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:28.353802  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:28.387201  673643 cri.go:89] found id: ""
	I0317 13:54:28.387228  673643 logs.go:282] 0 containers: []
	W0317 13:54:28.387236  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:28.387242  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:28.387303  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:28.430978  673643 cri.go:89] found id: ""
	I0317 13:54:28.431012  673643 logs.go:282] 0 containers: []
	W0317 13:54:28.431023  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:28.431030  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:28.431103  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:28.461859  673643 cri.go:89] found id: ""
	I0317 13:54:28.461888  673643 logs.go:282] 0 containers: []
	W0317 13:54:28.461899  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:28.461906  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:28.461974  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:28.496539  673643 cri.go:89] found id: ""
	I0317 13:54:28.496566  673643 logs.go:282] 0 containers: []
	W0317 13:54:28.496575  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:28.496582  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:28.496642  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:28.532954  673643 cri.go:89] found id: ""
	I0317 13:54:28.532987  673643 logs.go:282] 0 containers: []
	W0317 13:54:28.532998  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:28.533006  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:28.533073  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:28.572990  673643 cri.go:89] found id: ""
	I0317 13:54:28.573017  673643 logs.go:282] 0 containers: []
	W0317 13:54:28.573027  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:28.573036  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:28.573049  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:28.611265  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:28.611310  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:28.665994  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:28.666035  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:28.679495  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:28.679549  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:28.745938  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:28.745969  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:28.745985  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:31.323245  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:31.335406  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:31.335492  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:31.368809  673643 cri.go:89] found id: ""
	I0317 13:54:31.368843  673643 logs.go:282] 0 containers: []
	W0317 13:54:31.368856  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:31.368864  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:31.368927  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:31.402154  673643 cri.go:89] found id: ""
	I0317 13:54:31.402186  673643 logs.go:282] 0 containers: []
	W0317 13:54:31.402198  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:31.402206  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:31.402286  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:31.437597  673643 cri.go:89] found id: ""
	I0317 13:54:31.437633  673643 logs.go:282] 0 containers: []
	W0317 13:54:31.437645  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:31.437653  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:31.437720  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:31.471640  673643 cri.go:89] found id: ""
	I0317 13:54:31.471678  673643 logs.go:282] 0 containers: []
	W0317 13:54:31.471692  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:31.471700  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:31.471758  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:31.508162  673643 cri.go:89] found id: ""
	I0317 13:54:31.508196  673643 logs.go:282] 0 containers: []
	W0317 13:54:31.508207  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:31.508215  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:31.508294  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:31.540397  673643 cri.go:89] found id: ""
	I0317 13:54:31.540431  673643 logs.go:282] 0 containers: []
	W0317 13:54:31.540443  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:31.540451  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:31.540514  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:31.571588  673643 cri.go:89] found id: ""
	I0317 13:54:31.571621  673643 logs.go:282] 0 containers: []
	W0317 13:54:31.571632  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:31.571644  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:31.571700  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:31.602860  673643 cri.go:89] found id: ""
	I0317 13:54:31.602899  673643 logs.go:282] 0 containers: []
	W0317 13:54:31.602911  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:31.602925  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:31.602940  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:31.655521  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:31.655570  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:31.669264  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:31.669295  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:31.741422  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:31.741448  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:31.741462  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:31.823462  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:31.823507  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:34.360671  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:34.374173  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:34.374238  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:34.407096  673643 cri.go:89] found id: ""
	I0317 13:54:34.407129  673643 logs.go:282] 0 containers: []
	W0317 13:54:34.407138  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:34.407144  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:34.407194  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:34.441311  673643 cri.go:89] found id: ""
	I0317 13:54:34.441355  673643 logs.go:282] 0 containers: []
	W0317 13:54:34.441378  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:34.441386  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:34.441442  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:34.476081  673643 cri.go:89] found id: ""
	I0317 13:54:34.476112  673643 logs.go:282] 0 containers: []
	W0317 13:54:34.476121  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:34.476127  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:34.476188  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:34.507855  673643 cri.go:89] found id: ""
	I0317 13:54:34.507888  673643 logs.go:282] 0 containers: []
	W0317 13:54:34.507901  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:34.507911  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:34.507977  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:34.539557  673643 cri.go:89] found id: ""
	I0317 13:54:34.539590  673643 logs.go:282] 0 containers: []
	W0317 13:54:34.539600  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:34.539606  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:34.539665  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:34.574047  673643 cri.go:89] found id: ""
	I0317 13:54:34.574081  673643 logs.go:282] 0 containers: []
	W0317 13:54:34.574094  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:34.574102  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:34.574161  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:34.606359  673643 cri.go:89] found id: ""
	I0317 13:54:34.606401  673643 logs.go:282] 0 containers: []
	W0317 13:54:34.606414  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:34.606422  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:34.606479  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:34.639587  673643 cri.go:89] found id: ""
	I0317 13:54:34.639625  673643 logs.go:282] 0 containers: []
	W0317 13:54:34.639638  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:34.639651  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:34.639664  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:34.677108  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:34.677136  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:34.728996  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:34.729040  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:34.744041  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:34.744070  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:34.807176  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:34.807205  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:34.807223  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:37.387775  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:37.406124  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:37.406215  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:37.442197  673643 cri.go:89] found id: ""
	I0317 13:54:37.442230  673643 logs.go:282] 0 containers: []
	W0317 13:54:37.442240  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:37.442247  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:37.442309  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:37.488513  673643 cri.go:89] found id: ""
	I0317 13:54:37.488541  673643 logs.go:282] 0 containers: []
	W0317 13:54:37.488561  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:37.488569  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:37.488633  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:37.526111  673643 cri.go:89] found id: ""
	I0317 13:54:37.526145  673643 logs.go:282] 0 containers: []
	W0317 13:54:37.526154  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:37.526160  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:37.526241  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:37.559075  673643 cri.go:89] found id: ""
	I0317 13:54:37.559104  673643 logs.go:282] 0 containers: []
	W0317 13:54:37.559113  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:37.559119  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:37.559174  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:37.597036  673643 cri.go:89] found id: ""
	I0317 13:54:37.597069  673643 logs.go:282] 0 containers: []
	W0317 13:54:37.597080  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:37.597091  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:37.597151  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:37.634900  673643 cri.go:89] found id: ""
	I0317 13:54:37.634927  673643 logs.go:282] 0 containers: []
	W0317 13:54:37.634940  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:37.634953  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:37.635017  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:37.679261  673643 cri.go:89] found id: ""
	I0317 13:54:37.679289  673643 logs.go:282] 0 containers: []
	W0317 13:54:37.679298  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:37.679306  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:37.679364  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:37.736163  673643 cri.go:89] found id: ""
	I0317 13:54:37.736190  673643 logs.go:282] 0 containers: []
	W0317 13:54:37.736202  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:37.736214  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:37.736232  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:37.778217  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:37.778248  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:37.836049  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:37.836085  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:37.849892  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:37.849920  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:37.923499  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:37.923523  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:37.923550  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:40.511361  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:40.525479  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:40.525551  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:40.560543  673643 cri.go:89] found id: ""
	I0317 13:54:40.560575  673643 logs.go:282] 0 containers: []
	W0317 13:54:40.560597  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:40.560606  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:40.560665  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:40.597732  673643 cri.go:89] found id: ""
	I0317 13:54:40.597759  673643 logs.go:282] 0 containers: []
	W0317 13:54:40.597770  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:40.597778  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:40.597842  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:40.634790  673643 cri.go:89] found id: ""
	I0317 13:54:40.634825  673643 logs.go:282] 0 containers: []
	W0317 13:54:40.634838  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:40.634848  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:40.635087  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:40.671555  673643 cri.go:89] found id: ""
	I0317 13:54:40.671587  673643 logs.go:282] 0 containers: []
	W0317 13:54:40.671604  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:40.671617  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:40.671681  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:40.704159  673643 cri.go:89] found id: ""
	I0317 13:54:40.704187  673643 logs.go:282] 0 containers: []
	W0317 13:54:40.704201  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:40.704209  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:40.704275  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:40.735782  673643 cri.go:89] found id: ""
	I0317 13:54:40.735814  673643 logs.go:282] 0 containers: []
	W0317 13:54:40.735825  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:40.735833  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:40.735904  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:40.767296  673643 cri.go:89] found id: ""
	I0317 13:54:40.767333  673643 logs.go:282] 0 containers: []
	W0317 13:54:40.767345  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:40.767353  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:40.767425  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:40.801618  673643 cri.go:89] found id: ""
	I0317 13:54:40.801646  673643 logs.go:282] 0 containers: []
	W0317 13:54:40.801657  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:40.801669  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:40.801684  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:40.837492  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:40.837524  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:40.896726  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:40.896770  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:40.909672  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:40.909696  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:40.981256  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:40.981281  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:40.981294  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:43.561092  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:43.574259  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:43.574346  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:43.607626  673643 cri.go:89] found id: ""
	I0317 13:54:43.607660  673643 logs.go:282] 0 containers: []
	W0317 13:54:43.607673  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:43.607681  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:43.607757  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:43.640378  673643 cri.go:89] found id: ""
	I0317 13:54:43.640410  673643 logs.go:282] 0 containers: []
	W0317 13:54:43.640419  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:43.640425  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:43.640491  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:43.672653  673643 cri.go:89] found id: ""
	I0317 13:54:43.672686  673643 logs.go:282] 0 containers: []
	W0317 13:54:43.672698  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:43.672706  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:43.672774  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:43.707222  673643 cri.go:89] found id: ""
	I0317 13:54:43.707254  673643 logs.go:282] 0 containers: []
	W0317 13:54:43.707263  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:43.707268  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:43.707345  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:43.756092  673643 cri.go:89] found id: ""
	I0317 13:54:43.756129  673643 logs.go:282] 0 containers: []
	W0317 13:54:43.756141  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:43.756149  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:43.756214  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:43.790661  673643 cri.go:89] found id: ""
	I0317 13:54:43.790694  673643 logs.go:282] 0 containers: []
	W0317 13:54:43.790706  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:43.790715  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:43.790792  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:43.823910  673643 cri.go:89] found id: ""
	I0317 13:54:43.823938  673643 logs.go:282] 0 containers: []
	W0317 13:54:43.823948  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:43.823954  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:43.824006  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:43.857192  673643 cri.go:89] found id: ""
	I0317 13:54:43.857233  673643 logs.go:282] 0 containers: []
	W0317 13:54:43.857245  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:43.857260  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:43.857275  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:43.893626  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:43.893658  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:43.954811  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:43.954852  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:43.971616  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:43.971659  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:44.042079  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:44.042107  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:44.042120  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:46.630173  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:46.642933  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:46.642999  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:46.673212  673643 cri.go:89] found id: ""
	I0317 13:54:46.673246  673643 logs.go:282] 0 containers: []
	W0317 13:54:46.673258  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:46.673266  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:46.673332  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:46.705803  673643 cri.go:89] found id: ""
	I0317 13:54:46.705832  673643 logs.go:282] 0 containers: []
	W0317 13:54:46.705845  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:46.705853  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:46.705917  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:46.739713  673643 cri.go:89] found id: ""
	I0317 13:54:46.739744  673643 logs.go:282] 0 containers: []
	W0317 13:54:46.739756  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:46.739764  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:46.739823  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:46.774444  673643 cri.go:89] found id: ""
	I0317 13:54:46.774475  673643 logs.go:282] 0 containers: []
	W0317 13:54:46.774488  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:46.774498  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:46.774567  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:46.805757  673643 cri.go:89] found id: ""
	I0317 13:54:46.805792  673643 logs.go:282] 0 containers: []
	W0317 13:54:46.805805  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:46.805813  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:46.805889  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:46.836336  673643 cri.go:89] found id: ""
	I0317 13:54:46.836371  673643 logs.go:282] 0 containers: []
	W0317 13:54:46.836385  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:46.836393  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:46.836457  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:46.871798  673643 cri.go:89] found id: ""
	I0317 13:54:46.871827  673643 logs.go:282] 0 containers: []
	W0317 13:54:46.871836  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:46.871842  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:46.871907  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:46.910850  673643 cri.go:89] found id: ""
	I0317 13:54:46.910887  673643 logs.go:282] 0 containers: []
	W0317 13:54:46.910900  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:46.910915  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:46.910931  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:46.961169  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:46.961211  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:46.974388  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:46.974418  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:47.037554  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:47.037583  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:47.037607  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:47.114845  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:47.114894  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:49.651921  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:49.664882  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:49.664962  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:49.696140  673643 cri.go:89] found id: ""
	I0317 13:54:49.696177  673643 logs.go:282] 0 containers: []
	W0317 13:54:49.696189  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:49.696198  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:49.696268  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:49.740241  673643 cri.go:89] found id: ""
	I0317 13:54:49.740281  673643 logs.go:282] 0 containers: []
	W0317 13:54:49.740292  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:49.740299  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:49.740360  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:49.777129  673643 cri.go:89] found id: ""
	I0317 13:54:49.777166  673643 logs.go:282] 0 containers: []
	W0317 13:54:49.777178  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:49.777186  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:49.777252  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:49.811789  673643 cri.go:89] found id: ""
	I0317 13:54:49.811828  673643 logs.go:282] 0 containers: []
	W0317 13:54:49.811837  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:49.811844  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:49.811915  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:49.850879  673643 cri.go:89] found id: ""
	I0317 13:54:49.850915  673643 logs.go:282] 0 containers: []
	W0317 13:54:49.850929  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:49.850937  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:49.851016  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:49.887952  673643 cri.go:89] found id: ""
	I0317 13:54:49.887982  673643 logs.go:282] 0 containers: []
	W0317 13:54:49.887993  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:49.888001  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:49.888070  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:49.924909  673643 cri.go:89] found id: ""
	I0317 13:54:49.924945  673643 logs.go:282] 0 containers: []
	W0317 13:54:49.924956  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:49.924964  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:49.925046  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:49.965439  673643 cri.go:89] found id: ""
	I0317 13:54:49.965469  673643 logs.go:282] 0 containers: []
	W0317 13:54:49.965481  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:49.965493  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:49.965515  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:49.978738  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:49.978787  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:50.044160  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:50.044185  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:50.044201  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:50.138759  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:50.138808  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:50.180277  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:50.180320  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:52.739654  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:52.752748  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:52.752818  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:52.791915  673643 cri.go:89] found id: ""
	I0317 13:54:52.791950  673643 logs.go:282] 0 containers: []
	W0317 13:54:52.791962  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:52.791970  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:52.792029  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:52.832043  673643 cri.go:89] found id: ""
	I0317 13:54:52.832070  673643 logs.go:282] 0 containers: []
	W0317 13:54:52.832077  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:52.832083  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:52.832138  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:52.874328  673643 cri.go:89] found id: ""
	I0317 13:54:52.874358  673643 logs.go:282] 0 containers: []
	W0317 13:54:52.874366  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:52.874373  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:52.874430  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:52.906768  673643 cri.go:89] found id: ""
	I0317 13:54:52.906801  673643 logs.go:282] 0 containers: []
	W0317 13:54:52.906813  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:52.906821  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:52.906883  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:52.941205  673643 cri.go:89] found id: ""
	I0317 13:54:52.941241  673643 logs.go:282] 0 containers: []
	W0317 13:54:52.941253  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:52.941261  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:52.941327  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:52.971499  673643 cri.go:89] found id: ""
	I0317 13:54:52.971561  673643 logs.go:282] 0 containers: []
	W0317 13:54:52.971573  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:52.971579  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:52.971646  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:53.002624  673643 cri.go:89] found id: ""
	I0317 13:54:53.002653  673643 logs.go:282] 0 containers: []
	W0317 13:54:53.002662  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:53.002669  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:53.002719  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:53.035200  673643 cri.go:89] found id: ""
	I0317 13:54:53.035232  673643 logs.go:282] 0 containers: []
	W0317 13:54:53.035244  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:53.035258  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:53.035270  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:53.085278  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:53.085321  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:53.098083  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:53.098114  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:53.164104  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:53.164129  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:53.164141  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:53.244158  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:53.244205  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:55.790453  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:55.804884  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:55.804946  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:55.841229  673643 cri.go:89] found id: ""
	I0317 13:54:55.841262  673643 logs.go:282] 0 containers: []
	W0317 13:54:55.841274  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:55.841282  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:55.841356  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:55.877941  673643 cri.go:89] found id: ""
	I0317 13:54:55.877973  673643 logs.go:282] 0 containers: []
	W0317 13:54:55.877986  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:55.877994  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:55.878058  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:55.907940  673643 cri.go:89] found id: ""
	I0317 13:54:55.907976  673643 logs.go:282] 0 containers: []
	W0317 13:54:55.907988  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:55.907996  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:55.908052  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:55.941702  673643 cri.go:89] found id: ""
	I0317 13:54:55.941731  673643 logs.go:282] 0 containers: []
	W0317 13:54:55.941739  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:55.941745  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:55.941795  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:55.972753  673643 cri.go:89] found id: ""
	I0317 13:54:55.972783  673643 logs.go:282] 0 containers: []
	W0317 13:54:55.972794  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:55.972803  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:55.972871  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:56.006056  673643 cri.go:89] found id: ""
	I0317 13:54:56.006089  673643 logs.go:282] 0 containers: []
	W0317 13:54:56.006101  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:56.006108  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:56.006176  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:56.038383  673643 cri.go:89] found id: ""
	I0317 13:54:56.038418  673643 logs.go:282] 0 containers: []
	W0317 13:54:56.038430  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:56.038438  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:56.038512  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:56.069713  673643 cri.go:89] found id: ""
	I0317 13:54:56.069744  673643 logs.go:282] 0 containers: []
	W0317 13:54:56.069753  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:56.069762  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:56.069776  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:56.120949  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:56.120984  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:56.134855  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:56.134882  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:56.204313  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:56.204342  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:56.204354  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:56.284969  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:56.285006  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:54:58.824135  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:54:58.836679  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:54:58.836751  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:54:58.871638  673643 cri.go:89] found id: ""
	I0317 13:54:58.871667  673643 logs.go:282] 0 containers: []
	W0317 13:54:58.871676  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:54:58.871682  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:54:58.871734  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:54:58.910529  673643 cri.go:89] found id: ""
	I0317 13:54:58.910559  673643 logs.go:282] 0 containers: []
	W0317 13:54:58.910567  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:54:58.910574  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:54:58.910624  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:54:58.946657  673643 cri.go:89] found id: ""
	I0317 13:54:58.946691  673643 logs.go:282] 0 containers: []
	W0317 13:54:58.946704  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:54:58.946713  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:54:58.946782  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:54:58.984640  673643 cri.go:89] found id: ""
	I0317 13:54:58.984668  673643 logs.go:282] 0 containers: []
	W0317 13:54:58.984679  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:54:58.984687  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:54:58.984754  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:54:59.021590  673643 cri.go:89] found id: ""
	I0317 13:54:59.021619  673643 logs.go:282] 0 containers: []
	W0317 13:54:59.021627  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:54:59.021633  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:54:59.021682  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:54:59.062848  673643 cri.go:89] found id: ""
	I0317 13:54:59.062877  673643 logs.go:282] 0 containers: []
	W0317 13:54:59.062886  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:54:59.062893  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:54:59.062949  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:54:59.097644  673643 cri.go:89] found id: ""
	I0317 13:54:59.097683  673643 logs.go:282] 0 containers: []
	W0317 13:54:59.097696  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:54:59.097704  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:54:59.097781  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:54:59.130694  673643 cri.go:89] found id: ""
	I0317 13:54:59.130722  673643 logs.go:282] 0 containers: []
	W0317 13:54:59.130730  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:54:59.130740  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:54:59.130752  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:54:59.183031  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:54:59.183065  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:54:59.196184  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:54:59.196213  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:54:59.270576  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:54:59.270605  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:54:59.270620  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:54:59.346517  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:54:59.346556  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:01.887121  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:01.900982  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:01.901044  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:01.939659  673643 cri.go:89] found id: ""
	I0317 13:55:01.939688  673643 logs.go:282] 0 containers: []
	W0317 13:55:01.939696  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:01.939702  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:01.939768  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:01.976229  673643 cri.go:89] found id: ""
	I0317 13:55:01.976260  673643 logs.go:282] 0 containers: []
	W0317 13:55:01.976268  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:01.976275  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:01.976338  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:02.012478  673643 cri.go:89] found id: ""
	I0317 13:55:02.012507  673643 logs.go:282] 0 containers: []
	W0317 13:55:02.012515  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:02.012521  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:02.012584  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:02.050858  673643 cri.go:89] found id: ""
	I0317 13:55:02.050899  673643 logs.go:282] 0 containers: []
	W0317 13:55:02.050913  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:02.050922  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:02.051007  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:02.088206  673643 cri.go:89] found id: ""
	I0317 13:55:02.088235  673643 logs.go:282] 0 containers: []
	W0317 13:55:02.088243  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:02.088249  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:02.088312  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:02.124564  673643 cri.go:89] found id: ""
	I0317 13:55:02.124600  673643 logs.go:282] 0 containers: []
	W0317 13:55:02.124613  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:02.124622  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:02.124695  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:02.156619  673643 cri.go:89] found id: ""
	I0317 13:55:02.156647  673643 logs.go:282] 0 containers: []
	W0317 13:55:02.156655  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:02.156661  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:02.156709  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:02.189586  673643 cri.go:89] found id: ""
	I0317 13:55:02.189617  673643 logs.go:282] 0 containers: []
	W0317 13:55:02.189629  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:02.189641  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:02.189655  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:02.237775  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:02.237810  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:02.251366  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:02.251397  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:02.316570  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:02.316599  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:02.316615  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:02.394935  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:02.394974  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:04.931447  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:04.943610  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:04.943680  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:04.983939  673643 cri.go:89] found id: ""
	I0317 13:55:04.983965  673643 logs.go:282] 0 containers: []
	W0317 13:55:04.983977  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:04.983984  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:04.984039  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:05.041935  673643 cri.go:89] found id: ""
	I0317 13:55:05.041975  673643 logs.go:282] 0 containers: []
	W0317 13:55:05.041988  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:05.041996  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:05.042061  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:05.088288  673643 cri.go:89] found id: ""
	I0317 13:55:05.088318  673643 logs.go:282] 0 containers: []
	W0317 13:55:05.088330  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:05.088338  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:05.088407  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:05.121165  673643 cri.go:89] found id: ""
	I0317 13:55:05.121189  673643 logs.go:282] 0 containers: []
	W0317 13:55:05.121199  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:05.121206  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:05.121262  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:05.154173  673643 cri.go:89] found id: ""
	I0317 13:55:05.154200  673643 logs.go:282] 0 containers: []
	W0317 13:55:05.154209  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:05.154217  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:05.154272  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:05.189769  673643 cri.go:89] found id: ""
	I0317 13:55:05.189798  673643 logs.go:282] 0 containers: []
	W0317 13:55:05.189809  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:05.189816  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:05.189879  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:05.224704  673643 cri.go:89] found id: ""
	I0317 13:55:05.224740  673643 logs.go:282] 0 containers: []
	W0317 13:55:05.224752  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:05.224761  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:05.224826  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:05.256379  673643 cri.go:89] found id: ""
	I0317 13:55:05.256409  673643 logs.go:282] 0 containers: []
	W0317 13:55:05.256421  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:05.256433  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:05.256449  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:05.269155  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:05.269231  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:05.338698  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:05.338719  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:05.338731  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:05.427048  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:05.427080  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:05.468653  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:05.468687  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:08.028725  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:08.045770  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:08.045849  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:08.092825  673643 cri.go:89] found id: ""
	I0317 13:55:08.092862  673643 logs.go:282] 0 containers: []
	W0317 13:55:08.092875  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:08.092884  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:08.092962  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:08.137621  673643 cri.go:89] found id: ""
	I0317 13:55:08.137649  673643 logs.go:282] 0 containers: []
	W0317 13:55:08.137661  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:08.137668  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:08.137742  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:08.186042  673643 cri.go:89] found id: ""
	I0317 13:55:08.186066  673643 logs.go:282] 0 containers: []
	W0317 13:55:08.186074  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:08.186080  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:08.186135  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:08.232174  673643 cri.go:89] found id: ""
	I0317 13:55:08.232197  673643 logs.go:282] 0 containers: []
	W0317 13:55:08.232206  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:08.232218  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:08.232272  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:08.277533  673643 cri.go:89] found id: ""
	I0317 13:55:08.277579  673643 logs.go:282] 0 containers: []
	W0317 13:55:08.277591  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:08.277601  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:08.277686  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:08.316553  673643 cri.go:89] found id: ""
	I0317 13:55:08.316581  673643 logs.go:282] 0 containers: []
	W0317 13:55:08.316590  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:08.316598  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:08.316672  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:08.356632  673643 cri.go:89] found id: ""
	I0317 13:55:08.356669  673643 logs.go:282] 0 containers: []
	W0317 13:55:08.356682  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:08.356690  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:08.356764  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:08.404218  673643 cri.go:89] found id: ""
	I0317 13:55:08.404251  673643 logs.go:282] 0 containers: []
	W0317 13:55:08.404264  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:08.404277  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:08.404296  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:08.473094  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:08.473122  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:08.473143  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:08.569805  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:08.569849  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:08.619886  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:08.619924  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:08.691960  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:08.692012  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:11.207659  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:11.224705  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:11.224794  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:11.269997  673643 cri.go:89] found id: ""
	I0317 13:55:11.270045  673643 logs.go:282] 0 containers: []
	W0317 13:55:11.270058  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:11.270065  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:11.270149  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:11.311295  673643 cri.go:89] found id: ""
	I0317 13:55:11.311326  673643 logs.go:282] 0 containers: []
	W0317 13:55:11.311337  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:11.311344  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:11.311406  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:11.345396  673643 cri.go:89] found id: ""
	I0317 13:55:11.345429  673643 logs.go:282] 0 containers: []
	W0317 13:55:11.345441  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:11.345448  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:11.345503  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:11.389165  673643 cri.go:89] found id: ""
	I0317 13:55:11.389199  673643 logs.go:282] 0 containers: []
	W0317 13:55:11.389211  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:11.389219  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:11.389284  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:11.434910  673643 cri.go:89] found id: ""
	I0317 13:55:11.434990  673643 logs.go:282] 0 containers: []
	W0317 13:55:11.435022  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:11.435032  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:11.435104  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:11.470169  673643 cri.go:89] found id: ""
	I0317 13:55:11.470199  673643 logs.go:282] 0 containers: []
	W0317 13:55:11.470209  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:11.470226  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:11.470284  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:11.505515  673643 cri.go:89] found id: ""
	I0317 13:55:11.505554  673643 logs.go:282] 0 containers: []
	W0317 13:55:11.505575  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:11.505584  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:11.505663  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:11.539483  673643 cri.go:89] found id: ""
	I0317 13:55:11.539519  673643 logs.go:282] 0 containers: []
	W0317 13:55:11.539558  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:11.539573  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:11.539589  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:11.594462  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:11.594503  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:11.612510  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:11.612545  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:11.685554  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:11.685587  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:11.685602  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:11.775457  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:11.775496  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:14.325384  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:14.338190  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:14.338266  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:14.370555  673643 cri.go:89] found id: ""
	I0317 13:55:14.370587  673643 logs.go:282] 0 containers: []
	W0317 13:55:14.370598  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:14.370606  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:14.370668  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:14.404093  673643 cri.go:89] found id: ""
	I0317 13:55:14.404119  673643 logs.go:282] 0 containers: []
	W0317 13:55:14.404127  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:14.404133  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:14.404186  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:14.440098  673643 cri.go:89] found id: ""
	I0317 13:55:14.440135  673643 logs.go:282] 0 containers: []
	W0317 13:55:14.440149  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:14.440157  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:14.440224  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:14.477804  673643 cri.go:89] found id: ""
	I0317 13:55:14.477835  673643 logs.go:282] 0 containers: []
	W0317 13:55:14.477844  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:14.477850  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:14.477917  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:14.511653  673643 cri.go:89] found id: ""
	I0317 13:55:14.511688  673643 logs.go:282] 0 containers: []
	W0317 13:55:14.511697  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:14.511703  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:14.511762  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:14.553829  673643 cri.go:89] found id: ""
	I0317 13:55:14.553857  673643 logs.go:282] 0 containers: []
	W0317 13:55:14.553866  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:14.553872  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:14.553933  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:14.594499  673643 cri.go:89] found id: ""
	I0317 13:55:14.594532  673643 logs.go:282] 0 containers: []
	W0317 13:55:14.594547  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:14.594556  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:14.594625  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:14.629155  673643 cri.go:89] found id: ""
	I0317 13:55:14.629193  673643 logs.go:282] 0 containers: []
	W0317 13:55:14.629204  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:14.629216  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:14.629229  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:14.683759  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:14.683836  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:14.697850  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:14.697887  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:14.770749  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:14.770779  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:14.770797  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:14.854198  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:14.854259  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:17.397573  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:17.414727  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:17.414812  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:17.460011  673643 cri.go:89] found id: ""
	I0317 13:55:17.460042  673643 logs.go:282] 0 containers: []
	W0317 13:55:17.460054  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:17.460062  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:17.460131  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:17.500135  673643 cri.go:89] found id: ""
	I0317 13:55:17.500171  673643 logs.go:282] 0 containers: []
	W0317 13:55:17.500184  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:17.500191  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:17.500253  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:17.535018  673643 cri.go:89] found id: ""
	I0317 13:55:17.535044  673643 logs.go:282] 0 containers: []
	W0317 13:55:17.535055  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:17.535063  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:17.535116  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:17.577062  673643 cri.go:89] found id: ""
	I0317 13:55:17.577093  673643 logs.go:282] 0 containers: []
	W0317 13:55:17.577104  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:17.577112  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:17.577175  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:17.616730  673643 cri.go:89] found id: ""
	I0317 13:55:17.616760  673643 logs.go:282] 0 containers: []
	W0317 13:55:17.616771  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:17.616783  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:17.616852  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:17.653775  673643 cri.go:89] found id: ""
	I0317 13:55:17.653812  673643 logs.go:282] 0 containers: []
	W0317 13:55:17.653825  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:17.653834  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:17.653898  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:17.689575  673643 cri.go:89] found id: ""
	I0317 13:55:17.689611  673643 logs.go:282] 0 containers: []
	W0317 13:55:17.689626  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:17.689634  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:17.689695  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:17.719969  673643 cri.go:89] found id: ""
	I0317 13:55:17.720000  673643 logs.go:282] 0 containers: []
	W0317 13:55:17.720009  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:17.720026  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:17.720040  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:17.764244  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:17.764287  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:17.837767  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:17.837820  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:17.854321  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:17.854368  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:17.925794  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:17.925821  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:17.925836  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:20.513056  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:20.527773  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:20.527861  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:20.563526  673643 cri.go:89] found id: ""
	I0317 13:55:20.563582  673643 logs.go:282] 0 containers: []
	W0317 13:55:20.563595  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:20.563605  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:20.563689  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:20.598319  673643 cri.go:89] found id: ""
	I0317 13:55:20.598352  673643 logs.go:282] 0 containers: []
	W0317 13:55:20.598364  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:20.598372  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:20.598439  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:20.640226  673643 cri.go:89] found id: ""
	I0317 13:55:20.640255  673643 logs.go:282] 0 containers: []
	W0317 13:55:20.640267  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:20.640283  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:20.640348  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:20.682027  673643 cri.go:89] found id: ""
	I0317 13:55:20.682076  673643 logs.go:282] 0 containers: []
	W0317 13:55:20.682090  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:20.682099  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:20.682173  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:20.720767  673643 cri.go:89] found id: ""
	I0317 13:55:20.720800  673643 logs.go:282] 0 containers: []
	W0317 13:55:20.720812  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:20.720820  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:20.720886  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:20.757577  673643 cri.go:89] found id: ""
	I0317 13:55:20.757612  673643 logs.go:282] 0 containers: []
	W0317 13:55:20.757627  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:20.757636  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:20.757702  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:20.794447  673643 cri.go:89] found id: ""
	I0317 13:55:20.794490  673643 logs.go:282] 0 containers: []
	W0317 13:55:20.794502  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:20.794510  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:20.794590  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:20.830686  673643 cri.go:89] found id: ""
	I0317 13:55:20.830715  673643 logs.go:282] 0 containers: []
	W0317 13:55:20.830723  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:20.830735  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:20.830751  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:20.846279  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:20.846334  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:20.921111  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:20.921138  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:20.921154  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:21.008979  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:21.009033  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:21.050774  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:21.050816  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:23.606421  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:23.620252  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:23.620351  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:23.657857  673643 cri.go:89] found id: ""
	I0317 13:55:23.657892  673643 logs.go:282] 0 containers: []
	W0317 13:55:23.657903  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:23.657912  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:23.657975  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:23.693729  673643 cri.go:89] found id: ""
	I0317 13:55:23.693776  673643 logs.go:282] 0 containers: []
	W0317 13:55:23.693788  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:23.693797  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:23.693867  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:23.731668  673643 cri.go:89] found id: ""
	I0317 13:55:23.731710  673643 logs.go:282] 0 containers: []
	W0317 13:55:23.731722  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:23.731731  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:23.731800  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:23.765731  673643 cri.go:89] found id: ""
	I0317 13:55:23.765760  673643 logs.go:282] 0 containers: []
	W0317 13:55:23.765770  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:23.765778  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:23.765846  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:23.804968  673643 cri.go:89] found id: ""
	I0317 13:55:23.805004  673643 logs.go:282] 0 containers: []
	W0317 13:55:23.805016  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:23.805025  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:23.805125  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:23.840978  673643 cri.go:89] found id: ""
	I0317 13:55:23.841012  673643 logs.go:282] 0 containers: []
	W0317 13:55:23.841023  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:23.841032  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:23.841100  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:23.877931  673643 cri.go:89] found id: ""
	I0317 13:55:23.877970  673643 logs.go:282] 0 containers: []
	W0317 13:55:23.877982  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:23.877991  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:23.878056  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:23.916385  673643 cri.go:89] found id: ""
	I0317 13:55:23.916415  673643 logs.go:282] 0 containers: []
	W0317 13:55:23.916428  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:23.916440  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:23.916456  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:23.933810  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:23.933842  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:24.006401  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:24.006430  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:24.006451  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:24.115861  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:24.115901  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:24.165088  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:24.165132  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:26.719662  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:26.735089  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:26.735172  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:26.773074  673643 cri.go:89] found id: ""
	I0317 13:55:26.773106  673643 logs.go:282] 0 containers: []
	W0317 13:55:26.773117  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:26.773126  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:26.773200  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:26.820406  673643 cri.go:89] found id: ""
	I0317 13:55:26.820438  673643 logs.go:282] 0 containers: []
	W0317 13:55:26.820450  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:26.820457  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:26.820522  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:26.862724  673643 cri.go:89] found id: ""
	I0317 13:55:26.862762  673643 logs.go:282] 0 containers: []
	W0317 13:55:26.862776  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:26.862785  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:26.862854  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:26.909435  673643 cri.go:89] found id: ""
	I0317 13:55:26.909463  673643 logs.go:282] 0 containers: []
	W0317 13:55:26.909475  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:26.909482  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:26.909544  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:26.972508  673643 cri.go:89] found id: ""
	I0317 13:55:26.972527  673643 logs.go:282] 0 containers: []
	W0317 13:55:26.972537  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:26.972545  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:26.972611  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:27.032760  673643 cri.go:89] found id: ""
	I0317 13:55:27.032784  673643 logs.go:282] 0 containers: []
	W0317 13:55:27.032793  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:27.032802  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:27.032862  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:27.071321  673643 cri.go:89] found id: ""
	I0317 13:55:27.071352  673643 logs.go:282] 0 containers: []
	W0317 13:55:27.071363  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:27.071371  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:27.071439  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:27.115859  673643 cri.go:89] found id: ""
	I0317 13:55:27.115900  673643 logs.go:282] 0 containers: []
	W0317 13:55:27.115913  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:27.115926  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:27.115942  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:27.187961  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:27.188016  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:27.205617  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:27.205770  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:27.306444  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:27.306470  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:27.306496  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:27.442162  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:27.442204  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:29.992334  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:30.005382  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:30.005463  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:30.037447  673643 cri.go:89] found id: ""
	I0317 13:55:30.037482  673643 logs.go:282] 0 containers: []
	W0317 13:55:30.037491  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:30.037498  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:30.037581  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:30.071062  673643 cri.go:89] found id: ""
	I0317 13:55:30.071092  673643 logs.go:282] 0 containers: []
	W0317 13:55:30.071101  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:30.071107  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:30.071170  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:30.105232  673643 cri.go:89] found id: ""
	I0317 13:55:30.105271  673643 logs.go:282] 0 containers: []
	W0317 13:55:30.105285  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:30.105302  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:30.105371  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:30.136974  673643 cri.go:89] found id: ""
	I0317 13:55:30.137003  673643 logs.go:282] 0 containers: []
	W0317 13:55:30.137012  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:30.137017  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:30.137069  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:30.170425  673643 cri.go:89] found id: ""
	I0317 13:55:30.170455  673643 logs.go:282] 0 containers: []
	W0317 13:55:30.170467  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:30.170474  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:30.170538  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:30.200158  673643 cri.go:89] found id: ""
	I0317 13:55:30.200188  673643 logs.go:282] 0 containers: []
	W0317 13:55:30.200199  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:30.200208  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:30.200272  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:30.232462  673643 cri.go:89] found id: ""
	I0317 13:55:30.232510  673643 logs.go:282] 0 containers: []
	W0317 13:55:30.232524  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:30.232532  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:30.232598  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:30.267473  673643 cri.go:89] found id: ""
	I0317 13:55:30.267505  673643 logs.go:282] 0 containers: []
	W0317 13:55:30.267518  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:30.267546  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:30.267568  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:30.341551  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:30.341583  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:30.341595  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:30.421947  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:30.421985  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:30.461528  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:30.461567  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:30.512263  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:30.512303  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:33.027707  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:33.040554  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:33.040649  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:33.077023  673643 cri.go:89] found id: ""
	I0317 13:55:33.077062  673643 logs.go:282] 0 containers: []
	W0317 13:55:33.077074  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:33.077082  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:33.077152  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:33.138663  673643 cri.go:89] found id: ""
	I0317 13:55:33.138701  673643 logs.go:282] 0 containers: []
	W0317 13:55:33.138713  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:33.138721  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:33.138801  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:33.172226  673643 cri.go:89] found id: ""
	I0317 13:55:33.172255  673643 logs.go:282] 0 containers: []
	W0317 13:55:33.172265  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:33.172274  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:33.172351  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:33.206274  673643 cri.go:89] found id: ""
	I0317 13:55:33.206315  673643 logs.go:282] 0 containers: []
	W0317 13:55:33.206328  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:33.206336  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:33.206399  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:33.246268  673643 cri.go:89] found id: ""
	I0317 13:55:33.246297  673643 logs.go:282] 0 containers: []
	W0317 13:55:33.246309  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:33.246316  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:33.246397  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:33.283412  673643 cri.go:89] found id: ""
	I0317 13:55:33.283443  673643 logs.go:282] 0 containers: []
	W0317 13:55:33.283455  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:33.283465  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:33.283547  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:33.321276  673643 cri.go:89] found id: ""
	I0317 13:55:33.321310  673643 logs.go:282] 0 containers: []
	W0317 13:55:33.321323  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:33.321332  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:33.321403  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:33.355843  673643 cri.go:89] found id: ""
	I0317 13:55:33.355873  673643 logs.go:282] 0 containers: []
	W0317 13:55:33.355885  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:33.355896  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:33.355917  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:33.452836  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:33.452877  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:33.500152  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:33.500194  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:33.550401  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:33.550441  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:33.566601  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:33.566634  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:33.633941  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:36.135655  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:36.147999  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:36.148088  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:36.179772  673643 cri.go:89] found id: ""
	I0317 13:55:36.179802  673643 logs.go:282] 0 containers: []
	W0317 13:55:36.179814  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:36.179822  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:36.179888  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:36.210798  673643 cri.go:89] found id: ""
	I0317 13:55:36.210833  673643 logs.go:282] 0 containers: []
	W0317 13:55:36.210845  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:36.210851  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:36.210912  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:36.244536  673643 cri.go:89] found id: ""
	I0317 13:55:36.244581  673643 logs.go:282] 0 containers: []
	W0317 13:55:36.244595  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:36.244606  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:36.244670  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:36.276289  673643 cri.go:89] found id: ""
	I0317 13:55:36.276317  673643 logs.go:282] 0 containers: []
	W0317 13:55:36.276328  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:36.276337  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:36.276401  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:36.310937  673643 cri.go:89] found id: ""
	I0317 13:55:36.310964  673643 logs.go:282] 0 containers: []
	W0317 13:55:36.310974  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:36.310980  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:36.311038  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:36.349054  673643 cri.go:89] found id: ""
	I0317 13:55:36.349085  673643 logs.go:282] 0 containers: []
	W0317 13:55:36.349096  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:36.349104  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:36.349172  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:36.383560  673643 cri.go:89] found id: ""
	I0317 13:55:36.383593  673643 logs.go:282] 0 containers: []
	W0317 13:55:36.383606  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:36.383614  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:36.383695  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:36.416985  673643 cri.go:89] found id: ""
	I0317 13:55:36.417016  673643 logs.go:282] 0 containers: []
	W0317 13:55:36.417027  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:36.417039  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:36.417059  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:36.467098  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:36.467135  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:36.480235  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:36.480271  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:36.548453  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:36.548483  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:36.548500  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:36.622367  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:36.622412  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:39.163306  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:39.178124  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:39.178203  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:39.217992  673643 cri.go:89] found id: ""
	I0317 13:55:39.218026  673643 logs.go:282] 0 containers: []
	W0317 13:55:39.218036  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:39.218042  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:39.218106  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:39.255691  673643 cri.go:89] found id: ""
	I0317 13:55:39.255721  673643 logs.go:282] 0 containers: []
	W0317 13:55:39.255733  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:39.255741  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:39.255809  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:39.287255  673643 cri.go:89] found id: ""
	I0317 13:55:39.287287  673643 logs.go:282] 0 containers: []
	W0317 13:55:39.287298  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:39.287306  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:39.287364  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:39.319460  673643 cri.go:89] found id: ""
	I0317 13:55:39.319489  673643 logs.go:282] 0 containers: []
	W0317 13:55:39.319498  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:39.319504  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:39.319590  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:39.357203  673643 cri.go:89] found id: ""
	I0317 13:55:39.357236  673643 logs.go:282] 0 containers: []
	W0317 13:55:39.357244  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:39.357251  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:39.357316  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:39.393251  673643 cri.go:89] found id: ""
	I0317 13:55:39.393286  673643 logs.go:282] 0 containers: []
	W0317 13:55:39.393334  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:39.393344  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:39.393413  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:39.430342  673643 cri.go:89] found id: ""
	I0317 13:55:39.430375  673643 logs.go:282] 0 containers: []
	W0317 13:55:39.430389  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:39.430395  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:39.430460  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:39.474724  673643 cri.go:89] found id: ""
	I0317 13:55:39.474757  673643 logs.go:282] 0 containers: []
	W0317 13:55:39.474768  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:39.474779  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:39.474794  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:39.489245  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:39.489301  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:39.568404  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:39.568431  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:39.568448  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:39.660439  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:39.660500  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:39.700767  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:39.700811  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:42.261653  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:42.274122  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:42.274205  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:42.307236  673643 cri.go:89] found id: ""
	I0317 13:55:42.307264  673643 logs.go:282] 0 containers: []
	W0317 13:55:42.307273  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:42.307279  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:42.307330  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:42.342387  673643 cri.go:89] found id: ""
	I0317 13:55:42.342414  673643 logs.go:282] 0 containers: []
	W0317 13:55:42.342422  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:42.342430  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:42.342493  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:42.376302  673643 cri.go:89] found id: ""
	I0317 13:55:42.376334  673643 logs.go:282] 0 containers: []
	W0317 13:55:42.376347  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:42.376355  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:42.376430  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:42.413340  673643 cri.go:89] found id: ""
	I0317 13:55:42.413374  673643 logs.go:282] 0 containers: []
	W0317 13:55:42.413386  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:42.413394  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:42.413454  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:42.444954  673643 cri.go:89] found id: ""
	I0317 13:55:42.444989  673643 logs.go:282] 0 containers: []
	W0317 13:55:42.445002  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:42.445010  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:42.445078  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:42.486824  673643 cri.go:89] found id: ""
	I0317 13:55:42.486861  673643 logs.go:282] 0 containers: []
	W0317 13:55:42.486873  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:42.486881  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:42.486946  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:42.519480  673643 cri.go:89] found id: ""
	I0317 13:55:42.519507  673643 logs.go:282] 0 containers: []
	W0317 13:55:42.519515  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:42.519524  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:42.519593  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:42.557262  673643 cri.go:89] found id: ""
	I0317 13:55:42.557295  673643 logs.go:282] 0 containers: []
	W0317 13:55:42.557303  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:42.557316  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:42.557334  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:42.570649  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:42.570678  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:42.646019  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:42.646048  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:42.646060  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:42.728171  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:42.728226  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:42.768820  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:42.768853  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:45.327697  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:45.340847  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:45.340927  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:45.373239  673643 cri.go:89] found id: ""
	I0317 13:55:45.373277  673643 logs.go:282] 0 containers: []
	W0317 13:55:45.373291  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:45.373302  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:45.373372  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:45.404524  673643 cri.go:89] found id: ""
	I0317 13:55:45.404563  673643 logs.go:282] 0 containers: []
	W0317 13:55:45.404576  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:45.404585  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:45.404646  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:45.436294  673643 cri.go:89] found id: ""
	I0317 13:55:45.436322  673643 logs.go:282] 0 containers: []
	W0317 13:55:45.436334  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:45.436342  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:45.436408  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:45.467931  673643 cri.go:89] found id: ""
	I0317 13:55:45.467962  673643 logs.go:282] 0 containers: []
	W0317 13:55:45.467974  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:45.467982  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:45.468047  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:45.513114  673643 cri.go:89] found id: ""
	I0317 13:55:45.513144  673643 logs.go:282] 0 containers: []
	W0317 13:55:45.513156  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:45.513164  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:45.513232  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:45.556232  673643 cri.go:89] found id: ""
	I0317 13:55:45.556266  673643 logs.go:282] 0 containers: []
	W0317 13:55:45.556279  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:45.556287  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:45.556351  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:45.606926  673643 cri.go:89] found id: ""
	I0317 13:55:45.606961  673643 logs.go:282] 0 containers: []
	W0317 13:55:45.606974  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:45.606982  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:45.607049  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:45.639619  673643 cri.go:89] found id: ""
	I0317 13:55:45.639652  673643 logs.go:282] 0 containers: []
	W0317 13:55:45.639664  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:45.639677  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:45.639692  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:45.697082  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:45.697118  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:45.709416  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:45.709444  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:45.774046  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:45.774073  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:45.774095  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:45.858024  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:45.858067  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:48.397028  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:48.411392  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:48.411475  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:48.455075  673643 cri.go:89] found id: ""
	I0317 13:55:48.455112  673643 logs.go:282] 0 containers: []
	W0317 13:55:48.455125  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:48.455152  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:48.455235  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:48.493022  673643 cri.go:89] found id: ""
	I0317 13:55:48.493049  673643 logs.go:282] 0 containers: []
	W0317 13:55:48.493057  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:48.493064  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:48.493130  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:48.542010  673643 cri.go:89] found id: ""
	I0317 13:55:48.542042  673643 logs.go:282] 0 containers: []
	W0317 13:55:48.542053  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:48.542067  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:48.542131  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:48.583115  673643 cri.go:89] found id: ""
	I0317 13:55:48.583144  673643 logs.go:282] 0 containers: []
	W0317 13:55:48.583156  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:48.583164  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:48.583226  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:48.619463  673643 cri.go:89] found id: ""
	I0317 13:55:48.619497  673643 logs.go:282] 0 containers: []
	W0317 13:55:48.619509  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:48.619518  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:48.619618  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:48.653623  673643 cri.go:89] found id: ""
	I0317 13:55:48.653649  673643 logs.go:282] 0 containers: []
	W0317 13:55:48.653657  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:48.653663  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:48.653725  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:48.699432  673643 cri.go:89] found id: ""
	I0317 13:55:48.699459  673643 logs.go:282] 0 containers: []
	W0317 13:55:48.699471  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:48.699479  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:48.699562  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:48.742095  673643 cri.go:89] found id: ""
	I0317 13:55:48.742127  673643 logs.go:282] 0 containers: []
	W0317 13:55:48.742138  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:48.742154  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:48.742168  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:48.822684  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:48.822713  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:48.822727  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:48.929680  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:48.929727  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:48.971942  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:48.971984  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:49.042227  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:49.042265  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:51.558957  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:51.571237  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:51.571309  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:51.604367  673643 cri.go:89] found id: ""
	I0317 13:55:51.604398  673643 logs.go:282] 0 containers: []
	W0317 13:55:51.604410  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:51.604419  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:51.604483  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:51.645698  673643 cri.go:89] found id: ""
	I0317 13:55:51.645727  673643 logs.go:282] 0 containers: []
	W0317 13:55:51.645742  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:51.645750  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:51.645818  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:51.682753  673643 cri.go:89] found id: ""
	I0317 13:55:51.682777  673643 logs.go:282] 0 containers: []
	W0317 13:55:51.682786  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:51.682792  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:51.682844  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:51.719457  673643 cri.go:89] found id: ""
	I0317 13:55:51.719494  673643 logs.go:282] 0 containers: []
	W0317 13:55:51.719506  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:51.719514  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:51.719584  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:51.755357  673643 cri.go:89] found id: ""
	I0317 13:55:51.755386  673643 logs.go:282] 0 containers: []
	W0317 13:55:51.755395  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:51.755405  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:51.755470  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:51.796522  673643 cri.go:89] found id: ""
	I0317 13:55:51.796558  673643 logs.go:282] 0 containers: []
	W0317 13:55:51.796569  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:51.796575  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:51.796628  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:51.827335  673643 cri.go:89] found id: ""
	I0317 13:55:51.827368  673643 logs.go:282] 0 containers: []
	W0317 13:55:51.827379  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:51.827387  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:51.827458  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:51.858839  673643 cri.go:89] found id: ""
	I0317 13:55:51.858877  673643 logs.go:282] 0 containers: []
	W0317 13:55:51.858890  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:51.858902  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:51.858921  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:51.909813  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:51.909849  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:51.922433  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:51.922471  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:52.002085  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:52.002114  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:52.002131  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:52.077285  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:52.077327  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:54.615676  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:54.628166  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:54.628251  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:54.662710  673643 cri.go:89] found id: ""
	I0317 13:55:54.662743  673643 logs.go:282] 0 containers: []
	W0317 13:55:54.662755  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:54.662763  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:54.662828  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:54.696473  673643 cri.go:89] found id: ""
	I0317 13:55:54.696502  673643 logs.go:282] 0 containers: []
	W0317 13:55:54.696512  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:54.696518  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:54.696574  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:54.727887  673643 cri.go:89] found id: ""
	I0317 13:55:54.727917  673643 logs.go:282] 0 containers: []
	W0317 13:55:54.727926  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:54.727933  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:54.727998  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:54.759468  673643 cri.go:89] found id: ""
	I0317 13:55:54.759494  673643 logs.go:282] 0 containers: []
	W0317 13:55:54.759502  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:54.759507  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:54.759580  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:54.791091  673643 cri.go:89] found id: ""
	I0317 13:55:54.791122  673643 logs.go:282] 0 containers: []
	W0317 13:55:54.791133  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:54.791138  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:54.791191  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:54.823383  673643 cri.go:89] found id: ""
	I0317 13:55:54.823410  673643 logs.go:282] 0 containers: []
	W0317 13:55:54.823418  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:54.823424  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:54.823475  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:54.853896  673643 cri.go:89] found id: ""
	I0317 13:55:54.853928  673643 logs.go:282] 0 containers: []
	W0317 13:55:54.853938  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:54.853943  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:54.853993  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:54.887058  673643 cri.go:89] found id: ""
	I0317 13:55:54.887084  673643 logs.go:282] 0 containers: []
	W0317 13:55:54.887091  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:54.887102  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:54.887114  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:54.899478  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:54.899507  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:54.964869  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:54.964895  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:54.964908  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:55.042597  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:55.042641  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:55.083162  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:55.083203  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:55:57.635588  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:55:57.647337  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:55:57.647419  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:55:57.678144  673643 cri.go:89] found id: ""
	I0317 13:55:57.678171  673643 logs.go:282] 0 containers: []
	W0317 13:55:57.678179  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:55:57.678186  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:55:57.678249  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:55:57.708537  673643 cri.go:89] found id: ""
	I0317 13:55:57.708566  673643 logs.go:282] 0 containers: []
	W0317 13:55:57.708578  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:55:57.708585  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:55:57.708651  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:55:57.739131  673643 cri.go:89] found id: ""
	I0317 13:55:57.739164  673643 logs.go:282] 0 containers: []
	W0317 13:55:57.739176  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:55:57.739184  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:55:57.739245  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:55:57.770490  673643 cri.go:89] found id: ""
	I0317 13:55:57.770519  673643 logs.go:282] 0 containers: []
	W0317 13:55:57.770528  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:55:57.770534  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:55:57.770587  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:55:57.801882  673643 cri.go:89] found id: ""
	I0317 13:55:57.801914  673643 logs.go:282] 0 containers: []
	W0317 13:55:57.801924  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:55:57.801930  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:55:57.801982  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:55:57.836243  673643 cri.go:89] found id: ""
	I0317 13:55:57.836269  673643 logs.go:282] 0 containers: []
	W0317 13:55:57.836278  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:55:57.836284  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:55:57.836386  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:55:57.866004  673643 cri.go:89] found id: ""
	I0317 13:55:57.866032  673643 logs.go:282] 0 containers: []
	W0317 13:55:57.866042  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:55:57.866049  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:55:57.866121  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:55:57.895597  673643 cri.go:89] found id: ""
	I0317 13:55:57.895624  673643 logs.go:282] 0 containers: []
	W0317 13:55:57.895633  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:55:57.895643  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:55:57.895660  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:55:57.907393  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:55:57.907420  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:55:57.967691  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:55:57.967717  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:55:57.967732  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:55:58.046532  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:55:58.046571  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:55:58.090664  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:55:58.090694  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:56:00.642029  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:56:00.655595  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:56:00.655672  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:56:00.695443  673643 cri.go:89] found id: ""
	I0317 13:56:00.695469  673643 logs.go:282] 0 containers: []
	W0317 13:56:00.695476  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:56:00.695483  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:56:00.695556  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:56:00.734934  673643 cri.go:89] found id: ""
	I0317 13:56:00.734956  673643 logs.go:282] 0 containers: []
	W0317 13:56:00.734963  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:56:00.734968  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:56:00.735020  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:56:00.781031  673643 cri.go:89] found id: ""
	I0317 13:56:00.781059  673643 logs.go:282] 0 containers: []
	W0317 13:56:00.781070  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:56:00.781078  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:56:00.781132  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:56:00.821910  673643 cri.go:89] found id: ""
	I0317 13:56:00.821933  673643 logs.go:282] 0 containers: []
	W0317 13:56:00.821941  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:56:00.821949  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:56:00.821998  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:56:00.858240  673643 cri.go:89] found id: ""
	I0317 13:56:00.858264  673643 logs.go:282] 0 containers: []
	W0317 13:56:00.858272  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:56:00.858285  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:56:00.858347  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:56:00.901055  673643 cri.go:89] found id: ""
	I0317 13:56:00.901084  673643 logs.go:282] 0 containers: []
	W0317 13:56:00.901096  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:56:00.901104  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:56:00.901162  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:56:00.935973  673643 cri.go:89] found id: ""
	I0317 13:56:00.936005  673643 logs.go:282] 0 containers: []
	W0317 13:56:00.936016  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:56:00.936023  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:56:00.936082  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:56:00.976899  673643 cri.go:89] found id: ""
	I0317 13:56:00.976928  673643 logs.go:282] 0 containers: []
	W0317 13:56:00.976941  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:56:00.976953  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:56:00.976974  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:56:01.037214  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:56:01.037264  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:56:01.051941  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:56:01.051969  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:56:01.132959  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:56:01.132978  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:56:01.132992  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:56:01.226716  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:56:01.226767  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:56:03.765054  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:56:03.781792  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:56:03.781897  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:56:03.833416  673643 cri.go:89] found id: ""
	I0317 13:56:03.833450  673643 logs.go:282] 0 containers: []
	W0317 13:56:03.833465  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:56:03.833474  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:56:03.833553  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:56:03.883784  673643 cri.go:89] found id: ""
	I0317 13:56:03.883820  673643 logs.go:282] 0 containers: []
	W0317 13:56:03.883834  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:56:03.883843  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:56:03.883916  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:56:03.930698  673643 cri.go:89] found id: ""
	I0317 13:56:03.930728  673643 logs.go:282] 0 containers: []
	W0317 13:56:03.930739  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:56:03.930746  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:56:03.930807  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:56:03.970786  673643 cri.go:89] found id: ""
	I0317 13:56:03.970818  673643 logs.go:282] 0 containers: []
	W0317 13:56:03.970828  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:56:03.970836  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:56:03.970901  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:56:04.014742  673643 cri.go:89] found id: ""
	I0317 13:56:04.014768  673643 logs.go:282] 0 containers: []
	W0317 13:56:04.014778  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:56:04.014783  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:56:04.014848  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:56:04.067174  673643 cri.go:89] found id: ""
	I0317 13:56:04.067208  673643 logs.go:282] 0 containers: []
	W0317 13:56:04.067221  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:56:04.067228  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:56:04.067287  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:56:04.120963  673643 cri.go:89] found id: ""
	I0317 13:56:04.120994  673643 logs.go:282] 0 containers: []
	W0317 13:56:04.121007  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:56:04.121017  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:56:04.121081  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:56:04.169655  673643 cri.go:89] found id: ""
	I0317 13:56:04.169693  673643 logs.go:282] 0 containers: []
	W0317 13:56:04.169707  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:56:04.169720  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:56:04.169739  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:56:04.262205  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:56:04.262232  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:56:04.262249  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:56:04.358542  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:56:04.358587  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:56:04.410920  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:56:04.410954  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:56:04.483092  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:56:04.483138  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:56:07.011510  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:56:07.024242  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:56:07.024323  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:56:07.062935  673643 cri.go:89] found id: ""
	I0317 13:56:07.062968  673643 logs.go:282] 0 containers: []
	W0317 13:56:07.062981  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:56:07.062990  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:56:07.063062  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:56:07.098460  673643 cri.go:89] found id: ""
	I0317 13:56:07.098489  673643 logs.go:282] 0 containers: []
	W0317 13:56:07.098508  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:56:07.098516  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:56:07.098583  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:56:07.133479  673643 cri.go:89] found id: ""
	I0317 13:56:07.133514  673643 logs.go:282] 0 containers: []
	W0317 13:56:07.133526  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:56:07.133534  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:56:07.133606  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:56:07.171352  673643 cri.go:89] found id: ""
	I0317 13:56:07.171379  673643 logs.go:282] 0 containers: []
	W0317 13:56:07.171390  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:56:07.171398  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:56:07.171455  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:56:07.215966  673643 cri.go:89] found id: ""
	I0317 13:56:07.215998  673643 logs.go:282] 0 containers: []
	W0317 13:56:07.216010  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:56:07.216017  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:56:07.216083  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:56:07.266190  673643 cri.go:89] found id: ""
	I0317 13:56:07.266218  673643 logs.go:282] 0 containers: []
	W0317 13:56:07.266226  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:56:07.266232  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:56:07.266301  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:56:07.302977  673643 cri.go:89] found id: ""
	I0317 13:56:07.303016  673643 logs.go:282] 0 containers: []
	W0317 13:56:07.303034  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:56:07.303042  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:56:07.303109  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:56:07.337005  673643 cri.go:89] found id: ""
	I0317 13:56:07.337040  673643 logs.go:282] 0 containers: []
	W0317 13:56:07.337051  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:56:07.337065  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:56:07.337084  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:56:07.427244  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:56:07.427271  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:56:07.427287  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:56:07.509416  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:56:07.509448  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:56:07.546081  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:56:07.546109  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:56:07.598218  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:56:07.598257  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:56:10.112391  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:56:10.125375  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 13:56:10.125465  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 13:56:10.158833  673643 cri.go:89] found id: ""
	I0317 13:56:10.158861  673643 logs.go:282] 0 containers: []
	W0317 13:56:10.158870  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 13:56:10.158876  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 13:56:10.158940  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 13:56:10.191146  673643 cri.go:89] found id: ""
	I0317 13:56:10.191176  673643 logs.go:282] 0 containers: []
	W0317 13:56:10.191184  673643 logs.go:284] No container was found matching "etcd"
	I0317 13:56:10.191190  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 13:56:10.191256  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 13:56:10.230495  673643 cri.go:89] found id: ""
	I0317 13:56:10.230522  673643 logs.go:282] 0 containers: []
	W0317 13:56:10.230531  673643 logs.go:284] No container was found matching "coredns"
	I0317 13:56:10.230537  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 13:56:10.230591  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 13:56:10.261097  673643 cri.go:89] found id: ""
	I0317 13:56:10.261131  673643 logs.go:282] 0 containers: []
	W0317 13:56:10.261142  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 13:56:10.261151  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 13:56:10.261216  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 13:56:10.296780  673643 cri.go:89] found id: ""
	I0317 13:56:10.296812  673643 logs.go:282] 0 containers: []
	W0317 13:56:10.296825  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 13:56:10.296834  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 13:56:10.296893  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 13:56:10.330146  673643 cri.go:89] found id: ""
	I0317 13:56:10.330183  673643 logs.go:282] 0 containers: []
	W0317 13:56:10.330196  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 13:56:10.330207  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 13:56:10.330297  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 13:56:10.361219  673643 cri.go:89] found id: ""
	I0317 13:56:10.361258  673643 logs.go:282] 0 containers: []
	W0317 13:56:10.361270  673643 logs.go:284] No container was found matching "kindnet"
	I0317 13:56:10.361278  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 13:56:10.361342  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 13:56:10.393011  673643 cri.go:89] found id: ""
	I0317 13:56:10.393044  673643 logs.go:282] 0 containers: []
	W0317 13:56:10.393055  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 13:56:10.393067  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 13:56:10.393084  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0317 13:56:10.476923  673643 logs.go:123] Gathering logs for container status ...
	I0317 13:56:10.476971  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 13:56:10.512925  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 13:56:10.512957  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 13:56:10.576728  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 13:56:10.576771  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 13:56:10.591563  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 13:56:10.591601  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 13:56:10.657554  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 13:56:13.158274  673643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:56:13.175959  673643 kubeadm.go:597] duration metric: took 4m2.845543994s to restartPrimaryControlPlane
	W0317 13:56:13.176057  673643 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0317 13:56:13.176091  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0317 13:56:13.638294  673643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:56:13.656167  673643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:56:13.669932  673643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:56:13.682905  673643 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:56:13.682931  673643 kubeadm.go:157] found existing configuration files:
	
	I0317 13:56:13.682998  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:56:13.694659  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:56:13.694733  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:56:13.707563  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:56:13.719643  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:56:13.719718  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:56:13.732009  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:56:13.743939  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:56:13.744016  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:56:13.755948  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:56:13.767363  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:56:13.767436  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:56:13.777045  673643 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:56:13.851277  673643 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0317 13:56:13.851366  673643 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:56:14.001023  673643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:56:14.001232  673643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:56:14.001394  673643 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0317 13:56:14.198037  673643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:56:14.199983  673643 out.go:235]   - Generating certificates and keys ...
	I0317 13:56:14.202624  673643 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:56:14.202723  673643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:56:14.202833  673643 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0317 13:56:14.202920  673643 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0317 13:56:14.203019  673643 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0317 13:56:14.203097  673643 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0317 13:56:14.203186  673643 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0317 13:56:14.203271  673643 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0317 13:56:14.203375  673643 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0317 13:56:14.203480  673643 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0317 13:56:14.203547  673643 kubeadm.go:310] [certs] Using the existing "sa" key
	I0317 13:56:14.203630  673643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:56:14.948254  673643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:56:15.176994  673643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:56:15.429491  673643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:56:15.821238  673643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:56:15.841747  673643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:56:15.841927  673643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:56:15.842003  673643 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:56:15.969600  673643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:56:15.971663  673643 out.go:235]   - Booting up control plane ...
	I0317 13:56:15.971808  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:56:15.973183  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:56:15.974221  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:56:15.974895  673643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:56:15.977030  673643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0317 13:56:55.978178  673643 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0317 13:56:55.978349  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:56:55.978603  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:57:00.979503  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:57:00.979781  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:57:10.980291  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:57:10.980582  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:57:30.981390  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:57:30.981684  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:58:10.983776  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 13:58:10.984046  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 13:58:10.984059  673643 kubeadm.go:310] 
	I0317 13:58:10.984111  673643 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0317 13:58:10.984162  673643 kubeadm.go:310] 		timed out waiting for the condition
	I0317 13:58:10.984168  673643 kubeadm.go:310] 
	I0317 13:58:10.984212  673643 kubeadm.go:310] 	This error is likely caused by:
	I0317 13:58:10.984255  673643 kubeadm.go:310] 		- The kubelet is not running
	I0317 13:58:10.984409  673643 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0317 13:58:10.984417  673643 kubeadm.go:310] 
	I0317 13:58:10.984549  673643 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0317 13:58:10.984590  673643 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0317 13:58:10.984634  673643 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0317 13:58:10.984640  673643 kubeadm.go:310] 
	I0317 13:58:10.984771  673643 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0317 13:58:10.984881  673643 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0317 13:58:10.984888  673643 kubeadm.go:310] 
	I0317 13:58:10.985029  673643 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0317 13:58:10.985151  673643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0317 13:58:10.985255  673643 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0317 13:58:10.985355  673643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0317 13:58:10.985363  673643 kubeadm.go:310] 
	I0317 13:58:10.985722  673643 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:58:10.985839  673643 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0317 13:58:10.985926  673643 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0317 13:58:10.986064  673643 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0317 13:58:10.986115  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0317 13:58:11.488193  673643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:58:11.509123  673643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:58:11.524642  673643 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:58:11.524661  673643 kubeadm.go:157] found existing configuration files:
	
	I0317 13:58:11.524698  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:58:11.536779  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:58:11.536839  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:58:11.549414  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:58:11.562317  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:58:11.562384  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:58:11.575160  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:58:11.586657  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:58:11.586729  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:58:11.598526  673643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:58:11.606888  673643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:58:11.606944  673643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:58:11.615792  673643 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:58:11.869503  673643 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 14:00:08.100227  673643 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0317 14:00:08.100326  673643 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0317 14:00:08.101702  673643 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0317 14:00:08.101771  673643 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 14:00:08.101843  673643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 14:00:08.101949  673643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 14:00:08.102103  673643 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0317 14:00:08.102213  673643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 14:00:08.103900  673643 out.go:235]   - Generating certificates and keys ...
	I0317 14:00:08.103990  673643 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 14:00:08.104047  673643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 14:00:08.104124  673643 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0317 14:00:08.104200  673643 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0317 14:00:08.104303  673643 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0317 14:00:08.104384  673643 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0317 14:00:08.104471  673643 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0317 14:00:08.104558  673643 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0317 14:00:08.104655  673643 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0317 14:00:08.104750  673643 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0317 14:00:08.104799  673643 kubeadm.go:310] [certs] Using the existing "sa" key
	I0317 14:00:08.104865  673643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 14:00:08.104953  673643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 14:00:08.105028  673643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 14:00:08.105106  673643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 14:00:08.105200  673643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 14:00:08.105374  673643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 14:00:08.105449  673643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 14:00:08.105497  673643 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 14:00:08.105613  673643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 14:00:08.107093  673643 out.go:235]   - Booting up control plane ...
	I0317 14:00:08.107203  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 14:00:08.107321  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 14:00:08.107412  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 14:00:08.107544  673643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 14:00:08.107730  673643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0317 14:00:08.107811  673643 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0317 14:00:08.107903  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108136  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108241  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108504  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108614  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108874  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108968  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.109174  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.109230  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.109440  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.109465  673643 kubeadm.go:310] 
	I0317 14:00:08.109515  673643 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0317 14:00:08.109565  673643 kubeadm.go:310] 		timed out waiting for the condition
	I0317 14:00:08.109575  673643 kubeadm.go:310] 
	I0317 14:00:08.109617  673643 kubeadm.go:310] 	This error is likely caused by:
	I0317 14:00:08.109657  673643 kubeadm.go:310] 		- The kubelet is not running
	I0317 14:00:08.109782  673643 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0317 14:00:08.109799  673643 kubeadm.go:310] 
	I0317 14:00:08.109930  673643 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0317 14:00:08.109984  673643 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0317 14:00:08.110027  673643 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0317 14:00:08.110035  673643 kubeadm.go:310] 
	I0317 14:00:08.110118  673643 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0317 14:00:08.110184  673643 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0317 14:00:08.110190  673643 kubeadm.go:310] 
	I0317 14:00:08.110328  673643 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0317 14:00:08.110435  673643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0317 14:00:08.110496  673643 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0317 14:00:08.110562  673643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0317 14:00:08.110602  673643 kubeadm.go:310] 
	I0317 14:00:08.110625  673643 kubeadm.go:394] duration metric: took 7m57.828587617s to StartCluster
	I0317 14:00:08.110682  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 14:00:08.110737  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 14:00:08.142741  673643 cri.go:89] found id: ""
	I0317 14:00:08.142781  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.142795  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 14:00:08.142804  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 14:00:08.142877  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 14:00:08.174753  673643 cri.go:89] found id: ""
	I0317 14:00:08.174784  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.174796  673643 logs.go:284] No container was found matching "etcd"
	I0317 14:00:08.174804  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 14:00:08.174859  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 14:00:08.204965  673643 cri.go:89] found id: ""
	I0317 14:00:08.204997  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.205009  673643 logs.go:284] No container was found matching "coredns"
	I0317 14:00:08.205017  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 14:00:08.205081  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 14:00:08.235717  673643 cri.go:89] found id: ""
	I0317 14:00:08.235749  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.235757  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 14:00:08.235767  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 14:00:08.235833  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 14:00:08.265585  673643 cri.go:89] found id: ""
	I0317 14:00:08.265613  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.265623  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 14:00:08.265631  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 14:00:08.265718  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 14:00:08.295600  673643 cri.go:89] found id: ""
	I0317 14:00:08.295629  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.295641  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 14:00:08.295648  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 14:00:08.295713  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 14:00:08.327749  673643 cri.go:89] found id: ""
	I0317 14:00:08.327778  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.327787  673643 logs.go:284] No container was found matching "kindnet"
	I0317 14:00:08.327794  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 14:00:08.327855  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 14:00:08.359913  673643 cri.go:89] found id: ""
	I0317 14:00:08.359944  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.359952  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 14:00:08.359962  673643 logs.go:123] Gathering logs for container status ...
	I0317 14:00:08.359975  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 14:00:08.396929  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 14:00:08.396959  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 14:00:08.451498  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 14:00:08.451556  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 14:00:08.464742  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 14:00:08.464771  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 14:00:08.537703  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 14:00:08.537733  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 14:00:08.537749  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0317 14:00:08.658936  673643 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0317 14:00:08.659006  673643 out.go:270] * 
	* 
	W0317 14:00:08.659061  673643 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0317 14:00:08.659074  673643 out.go:270] * 
	* 
	W0317 14:00:08.659944  673643 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 14:00:08.663521  673643 out.go:201] 
	W0317 14:00:08.664750  673643 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0317 14:00:08.664794  673643 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0317 14:00:08.664812  673643 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0317 14:00:08.666351  673643 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-803027 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
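The kubeadm and minikube messages above already name the triage steps; run outside the test harness against the same profile, they look roughly like the sketch below. This is a minimal sketch, not a verified fix: the profile name, driver, container runtime and Kubernetes version are copied from the failing start command above, the remaining flags of that command are omitted for brevity, and the cgroup-driver override is the suggestion minikube itself prints.

	# Kubelet health on the guest, as suggested by the kubeadm output:
	out/minikube-linux-amd64 -p old-k8s-version-803027 ssh "sudo systemctl status kubelet --all --full --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-803027 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	# Control-plane containers via crictl against the cri-o socket, as suggested by the kubeadm output:
	out/minikube-linux-amd64 -p old-k8s-version-803027 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the kubelet cgroup-driver override suggested in the minikube error output:
	out/minikube-linux-amd64 start -p old-k8s-version-803027 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
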
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 2 (241.527947ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-803027 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750 sudo cat                | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750 sudo cat                | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750 sudo cat                | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-788750                         | enable-default-cni-788750 | jenkins | v1.35.0 | 17 Mar 25 13:59 UTC | 17 Mar 25 13:59 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:59:14
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:59:14.981692  684423 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:59:14.981852  684423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:59:14.981867  684423 out.go:358] Setting ErrFile to fd 2...
	I0317 13:59:14.981874  684423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:59:14.982141  684423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:59:14.982809  684423 out.go:352] Setting JSON to false
	I0317 13:59:14.984111  684423 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13299,"bootTime":1742206656,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:59:14.984207  684423 start.go:139] virtualization: kvm guest
	I0317 13:59:14.986343  684423 out.go:177] * [bridge-788750] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:59:14.987706  684423 notify.go:220] Checking for updates...
	I0317 13:59:14.987715  684423 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:59:14.989330  684423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:59:14.990916  684423 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:59:14.992287  684423 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:59:14.993610  684423 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:59:14.995116  684423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:59:14.997007  684423 config.go:182] Loaded profile config "enable-default-cni-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:14.997099  684423 config.go:182] Loaded profile config "flannel-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:14.997181  684423 config.go:182] Loaded profile config "old-k8s-version-803027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:59:14.997265  684423 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:59:15.035775  684423 out.go:177] * Using the kvm2 driver based on user configuration
	I0317 13:59:14.820648  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.821374  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has current primary IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.821404  682662 main.go:141] libmachine: (flannel-788750) found domain IP: 192.168.72.30
	I0317 13:59:14.821418  682662 main.go:141] libmachine: (flannel-788750) reserving static IP address...
	I0317 13:59:14.821957  682662 main.go:141] libmachine: (flannel-788750) DBG | unable to find host DHCP lease matching {name: "flannel-788750", mac: "52:54:00:55:e8:19", ip: "192.168.72.30"} in network mk-flannel-788750
	I0317 13:59:14.906769  682662 main.go:141] libmachine: (flannel-788750) DBG | Getting to WaitForSSH function...
	I0317 13:59:14.906805  682662 main.go:141] libmachine: (flannel-788750) reserved static IP address 192.168.72.30 for domain flannel-788750
	I0317 13:59:14.906819  682662 main.go:141] libmachine: (flannel-788750) waiting for SSH...
	I0317 13:59:14.909743  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.910088  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:14.910120  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.910299  682662 main.go:141] libmachine: (flannel-788750) DBG | Using SSH client type: external
	I0317 13:59:14.910327  682662 main.go:141] libmachine: (flannel-788750) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa (-rw-------)
	I0317 13:59:14.910360  682662 main.go:141] libmachine: (flannel-788750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:59:14.910373  682662 main.go:141] libmachine: (flannel-788750) DBG | About to run SSH command:
	I0317 13:59:14.910388  682662 main.go:141] libmachine: (flannel-788750) DBG | exit 0
	I0317 13:59:15.039803  682662 main.go:141] libmachine: (flannel-788750) DBG | SSH cmd err, output: <nil>: 
	I0317 13:59:15.040031  682662 main.go:141] libmachine: (flannel-788750) KVM machine creation complete
	I0317 13:59:15.040330  682662 main.go:141] libmachine: (flannel-788750) Calling .GetConfigRaw
	I0317 13:59:15.040923  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:15.041146  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:15.041319  682662 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 13:59:15.041338  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:15.037243  684423 start.go:297] selected driver: kvm2
	I0317 13:59:15.037267  684423 start.go:901] validating driver "kvm2" against <nil>
	I0317 13:59:15.037287  684423 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:59:15.038541  684423 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:59:15.038644  684423 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 13:59:15.057562  684423 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 13:59:15.057627  684423 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 13:59:15.057863  684423 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:59:15.057895  684423 cni.go:84] Creating CNI manager for "bridge"
	I0317 13:59:15.057901  684423 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 13:59:15.057945  684423 start.go:340] cluster config:
	{Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:59:15.058020  684423 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:59:15.059766  684423 out.go:177] * Starting "bridge-788750" primary control-plane node in "bridge-788750" cluster
	I0317 13:59:15.061061  684423 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:59:15.061110  684423 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0317 13:59:15.061133  684423 cache.go:56] Caching tarball of preloaded images
	I0317 13:59:15.061226  684423 preload.go:172] Found /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0317 13:59:15.061242  684423 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0317 13:59:15.061359  684423 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/config.json ...
	I0317 13:59:15.061391  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/config.json: {Name:mkeb86f621957feb90cebae88f4bfc025146aa69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:15.061584  684423 start.go:360] acquireMachinesLock for bridge-788750: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:59:16.236468  684423 start.go:364] duration metric: took 1.174838631s to acquireMachinesLock for "bridge-788750"
	I0317 13:59:16.236553  684423 start.go:93] Provisioning new machine with config: &{Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:59:16.236662  684423 start.go:125] createHost starting for "" (driver="kvm2")
	I0317 13:59:15.042960  682662 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 13:59:15.042977  682662 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 13:59:15.042984  682662 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 13:59:15.042994  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.046053  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.046440  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.046460  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.046654  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.047369  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.047564  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.047723  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.047905  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.048115  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.048125  682662 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 13:59:15.155136  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:59:15.155161  682662 main.go:141] libmachine: Detecting the provisioner...
	I0317 13:59:15.155171  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.157989  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.158314  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.158344  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.158604  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.158819  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.158982  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.159164  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.159287  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.159569  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.159584  682662 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 13:59:15.263937  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 13:59:15.264006  682662 main.go:141] libmachine: found compatible host: buildroot
	I0317 13:59:15.264013  682662 main.go:141] libmachine: Provisioning with buildroot...
	I0317 13:59:15.264021  682662 main.go:141] libmachine: (flannel-788750) Calling .GetMachineName
	I0317 13:59:15.264323  682662 buildroot.go:166] provisioning hostname "flannel-788750"
	I0317 13:59:15.264358  682662 main.go:141] libmachine: (flannel-788750) Calling .GetMachineName
	I0317 13:59:15.264595  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.267397  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.267894  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.267919  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.268123  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.268363  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.268540  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.268702  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.268870  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.269106  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.269121  682662 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-788750 && echo "flannel-788750" | sudo tee /etc/hostname
	I0317 13:59:15.393761  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-788750
	
	I0317 13:59:15.393795  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.396701  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.397053  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.397079  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.397315  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.397527  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.397685  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.397812  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.397956  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.398219  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.398235  682662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-788750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-788750/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-788750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:59:15.512038  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:59:15.512072  682662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:59:15.512093  682662 buildroot.go:174] setting up certificates
	I0317 13:59:15.512102  682662 provision.go:84] configureAuth start
	I0317 13:59:15.512110  682662 main.go:141] libmachine: (flannel-788750) Calling .GetMachineName
	I0317 13:59:15.512392  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:15.515143  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.515466  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.515492  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.515711  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.517703  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.517986  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.518013  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.518136  682662 provision.go:143] copyHostCerts
	I0317 13:59:15.518194  682662 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:59:15.518211  682662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:59:15.518281  682662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:59:15.518370  682662 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:59:15.518378  682662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:59:15.518401  682662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:59:15.518451  682662 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:59:15.518459  682662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:59:15.518487  682662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:59:15.518537  682662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.flannel-788750 san=[127.0.0.1 192.168.72.30 flannel-788750 localhost minikube]
	I0317 13:59:15.606367  682662 provision.go:177] copyRemoteCerts
	I0317 13:59:15.606436  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:59:15.606478  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.608965  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.609288  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.609320  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.609467  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.609677  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.609868  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.610035  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:15.692959  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:59:15.715060  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0317 13:59:15.736168  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 13:59:15.757347  682662 provision.go:87] duration metric: took 245.231065ms to configureAuth
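
The configureAuth step above generates a server certificate signed by minikube's local CA, with the SAN list reported by the provision.go line (127.0.0.1, 192.168.72.30, flannel-788750, localhost, minikube). The following is a minimal, self-contained Go sketch of that technique, not minikube's actual provisioning code; the throwaway CA generated inline stands in for the ca.pem/ca-key.pem files referenced in the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// 1. A throwaway CA, standing in for .minikube/certs/ca.pem and ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// 2. Server certificate carrying the SANs shown in the log above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-788750"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.30")},
		DNSNames:     []string{"flannel-788750", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	// Emit the signed server certificate in PEM form (server.pem in the log).
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
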
	I0317 13:59:15.757375  682662 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:59:15.757523  682662 config.go:182] Loaded profile config "flannel-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:15.757599  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.760083  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.760447  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.760473  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.760703  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.760886  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.761040  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.761189  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.761364  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.761619  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.761640  682662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:59:15.989797  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:59:15.989831  682662 main.go:141] libmachine: Checking connection to Docker...
	I0317 13:59:15.989841  682662 main.go:141] libmachine: (flannel-788750) Calling .GetURL
	I0317 13:59:15.991175  682662 main.go:141] libmachine: (flannel-788750) DBG | using libvirt version 6000000
	I0317 13:59:15.993619  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.993970  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.993998  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.994173  682662 main.go:141] libmachine: Docker is up and running!
	I0317 13:59:15.994189  682662 main.go:141] libmachine: Reticulating splines...
	I0317 13:59:15.994198  682662 client.go:171] duration metric: took 25.832600711s to LocalClient.Create
	I0317 13:59:15.994227  682662 start.go:167] duration metric: took 25.832673652s to libmachine.API.Create "flannel-788750"
	I0317 13:59:15.994239  682662 start.go:293] postStartSetup for "flannel-788750" (driver="kvm2")
	I0317 13:59:15.994255  682662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:59:15.994280  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:15.994552  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:59:15.994591  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.996836  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.997188  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.997218  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.997354  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.997523  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.997708  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.997830  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:16.082655  682662 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:59:16.086465  682662 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:59:16.086500  682662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:59:16.086557  682662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:59:16.086623  682662 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:59:16.086707  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:59:16.096327  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:16.120987  682662 start.go:296] duration metric: took 126.730504ms for postStartSetup
	I0317 13:59:16.121051  682662 main.go:141] libmachine: (flannel-788750) Calling .GetConfigRaw
	I0317 13:59:16.121795  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:16.124252  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.124669  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.124695  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.124960  682662 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/config.json ...
	I0317 13:59:16.125174  682662 start.go:128] duration metric: took 25.986754439s to createHost
	I0317 13:59:16.125209  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:16.127973  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.128376  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.128405  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.128538  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:16.128709  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.128874  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.129023  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:16.129206  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:16.129486  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:16.129501  682662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:59:16.236319  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219956.214556978
	
	I0317 13:59:16.236345  682662 fix.go:216] guest clock: 1742219956.214556978
	I0317 13:59:16.236353  682662 fix.go:229] Guest: 2025-03-17 13:59:16.214556978 +0000 UTC Remote: 2025-03-17 13:59:16.125191891 +0000 UTC m=+26.132597802 (delta=89.365087ms)
	I0317 13:59:16.236374  682662 fix.go:200] guest clock delta is within tolerance: 89.365087ms
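
The fix.go lines above compare the clock reported by the guest VM against the host's wall clock and accept the drift because it is within tolerance (about 89ms here). A small Go sketch of that kind of check follows; the 2-second tolerance is assumed purely for illustration and is not necessarily the value minikube uses.

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between two timestamps.
func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	host := time.Now()
	guest := host.Add(89 * time.Millisecond) // drift similar to the log (~89ms)
	tolerance := 2 * time.Second             // illustrative threshold

	if d := clockDelta(guest, host); d > tolerance {
		fmt.Printf("guest clock out of tolerance: %v\n", d)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", d)
	}
}
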
	I0317 13:59:16.236379  682662 start.go:83] releasing machines lock for "flannel-788750", held for 26.098086792s
	I0317 13:59:16.236406  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.236717  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:16.240150  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.242931  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.242954  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.243184  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.243857  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.247621  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.247686  682662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:59:16.247747  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:16.247854  682662 ssh_runner.go:195] Run: cat /version.json
	I0317 13:59:16.247879  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:16.251119  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251267  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251402  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.251424  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251567  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.251590  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251600  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:16.251792  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:16.251874  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.251958  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.252029  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:16.252213  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:16.252268  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:16.252413  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:16.336552  682662 ssh_runner.go:195] Run: systemctl --version
	I0317 13:59:16.372394  682662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:59:16.543479  682662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:59:16.549196  682662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:59:16.549278  682662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:59:16.567894  682662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:59:16.567923  682662 start.go:495] detecting cgroup driver to use...
	I0317 13:59:16.568007  682662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:59:16.591718  682662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:59:16.606627  682662 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:59:16.606699  682662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:59:16.620043  682662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:59:16.635200  682662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:59:16.752393  682662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:59:16.899089  682662 docker.go:233] disabling docker service ...
	I0317 13:59:16.899148  682662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:59:16.914164  682662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:59:16.928117  682662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:59:17.053498  682662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:59:17.189186  682662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:59:17.203833  682662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:59:17.223316  682662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0317 13:59:17.223397  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.233530  682662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:59:17.233601  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.243490  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.253607  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.263744  682662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:59:17.274183  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.287378  682662 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.303360  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.313576  682662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:59:17.322490  682662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:59:17.322555  682662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:59:17.336395  682662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:59:17.345254  682662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:17.458590  682662 ssh_runner.go:195] Run: sudo systemctl restart crio
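
The sequence above rewrites individual key = value lines in /etc/crio/crio.conf.d/02-crio.conf with sed (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before restarting CRI-O. The Go sketch below shows the same line-rewriting idea generically with a regular expression mirroring the sed pattern `^.*cgroup_manager = .*$`; it operates on an in-memory string rather than the real drop-in, and the helper name setConfigLine is made up for illustration.

package main

import (
	"fmt"
	"regexp"
)

// setConfigLine replaces an entire `key = ...` line with `key = "value"`,
// much like the `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log.
func setConfigLine(content, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(content, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	conf = setConfigLine(conf, "cgroup_manager", "cgroupfs")
	conf = setConfigLine(conf, "pause_image", "registry.k8s.io/pause:3.10")
	fmt.Print(conf)
}
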
	I0317 13:59:17.543773  682662 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:59:17.543842  682662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:59:17.548368  682662 start.go:563] Will wait 60s for crictl version
	I0317 13:59:17.548436  682662 ssh_runner.go:195] Run: which crictl
	I0317 13:59:17.552779  682662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:59:17.595329  682662 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:59:17.595419  682662 ssh_runner.go:195] Run: crio --version
	I0317 13:59:17.621136  682662 ssh_runner.go:195] Run: crio --version
	I0317 13:59:17.650209  682662 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0317 13:59:16.239781  684423 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0317 13:59:16.239987  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:16.240028  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:16.260585  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33303
	I0317 13:59:16.261043  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:16.261626  684423 main.go:141] libmachine: Using API Version  1
	I0317 13:59:16.261650  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:16.262203  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:16.262429  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:16.262618  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:16.262805  684423 start.go:159] libmachine.API.Create for "bridge-788750" (driver="kvm2")
	I0317 13:59:16.262832  684423 client.go:168] LocalClient.Create starting
	I0317 13:59:16.262873  684423 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem
	I0317 13:59:16.262914  684423 main.go:141] libmachine: Decoding PEM data...
	I0317 13:59:16.262936  684423 main.go:141] libmachine: Parsing certificate...
	I0317 13:59:16.263026  684423 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem
	I0317 13:59:16.263055  684423 main.go:141] libmachine: Decoding PEM data...
	I0317 13:59:16.263627  684423 main.go:141] libmachine: Parsing certificate...
	I0317 13:59:16.263688  684423 main.go:141] libmachine: Running pre-create checks...
	I0317 13:59:16.263699  684423 main.go:141] libmachine: (bridge-788750) Calling .PreCreateCheck
	I0317 13:59:16.265317  684423 main.go:141] libmachine: (bridge-788750) Calling .GetConfigRaw
	I0317 13:59:16.266685  684423 main.go:141] libmachine: Creating machine...
	I0317 13:59:16.266703  684423 main.go:141] libmachine: (bridge-788750) Calling .Create
	I0317 13:59:16.266873  684423 main.go:141] libmachine: (bridge-788750) creating KVM machine...
	I0317 13:59:16.266894  684423 main.go:141] libmachine: (bridge-788750) creating network...
	I0317 13:59:16.268321  684423 main.go:141] libmachine: (bridge-788750) DBG | found existing default KVM network
	I0317 13:59:16.270323  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.270123  684478 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013de0}
	I0317 13:59:16.270347  684423 main.go:141] libmachine: (bridge-788750) DBG | created network xml: 
	I0317 13:59:16.270356  684423 main.go:141] libmachine: (bridge-788750) DBG | <network>
	I0317 13:59:16.270365  684423 main.go:141] libmachine: (bridge-788750) DBG |   <name>mk-bridge-788750</name>
	I0317 13:59:16.270372  684423 main.go:141] libmachine: (bridge-788750) DBG |   <dns enable='no'/>
	I0317 13:59:16.270379  684423 main.go:141] libmachine: (bridge-788750) DBG |   
	I0317 13:59:16.270388  684423 main.go:141] libmachine: (bridge-788750) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0317 13:59:16.270396  684423 main.go:141] libmachine: (bridge-788750) DBG |     <dhcp>
	I0317 13:59:16.270404  684423 main.go:141] libmachine: (bridge-788750) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0317 13:59:16.270412  684423 main.go:141] libmachine: (bridge-788750) DBG |     </dhcp>
	I0317 13:59:16.270418  684423 main.go:141] libmachine: (bridge-788750) DBG |   </ip>
	I0317 13:59:16.270426  684423 main.go:141] libmachine: (bridge-788750) DBG |   
	I0317 13:59:16.270432  684423 main.go:141] libmachine: (bridge-788750) DBG | </network>
	I0317 13:59:16.270440  684423 main.go:141] libmachine: (bridge-788750) DBG | 
	I0317 13:59:16.276393  684423 main.go:141] libmachine: (bridge-788750) DBG | trying to create private KVM network mk-bridge-788750 192.168.39.0/24...
	I0317 13:59:16.361973  684423 main.go:141] libmachine: (bridge-788750) setting up store path in /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750 ...
	I0317 13:59:16.362009  684423 main.go:141] libmachine: (bridge-788750) DBG | private KVM network mk-bridge-788750 192.168.39.0/24 created
	I0317 13:59:16.362022  684423 main.go:141] libmachine: (bridge-788750) building disk image from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0317 13:59:16.362044  684423 main.go:141] libmachine: (bridge-788750) Downloading /home/jenkins/minikube-integration/20539-621978/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 13:59:16.362105  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.359405  684478 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:59:16.657775  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.657652  684478 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa...
	I0317 13:59:16.896870  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.896712  684478 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/bridge-788750.rawdisk...
	I0317 13:59:16.896904  684423 main.go:141] libmachine: (bridge-788750) DBG | Writing magic tar header
	I0317 13:59:16.896919  684423 main.go:141] libmachine: (bridge-788750) DBG | Writing SSH key tar header
	I0317 13:59:16.896931  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.896829  684478 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750 ...
	I0317 13:59:16.896949  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750
	I0317 13:59:16.896963  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750 (perms=drwx------)
	I0317 13:59:16.896975  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines
	I0317 13:59:16.896989  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:59:16.897000  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978
	I0317 13:59:16.897011  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines (perms=drwxr-xr-x)
	I0317 13:59:16.897027  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube (perms=drwxr-xr-x)
	I0317 13:59:16.897040  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978 (perms=drwxrwxr-x)
	I0317 13:59:16.897049  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0317 13:59:16.897059  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins
	I0317 13:59:16.897070  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home
	I0317 13:59:16.897081  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0317 13:59:16.897102  684423 main.go:141] libmachine: (bridge-788750) DBG | skipping /home - not owner
	I0317 13:59:16.897114  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0317 13:59:16.897128  684423 main.go:141] libmachine: (bridge-788750) creating domain...
	I0317 13:59:16.898240  684423 main.go:141] libmachine: (bridge-788750) define libvirt domain using xml: 
	I0317 13:59:16.898267  684423 main.go:141] libmachine: (bridge-788750) <domain type='kvm'>
	I0317 13:59:16.898276  684423 main.go:141] libmachine: (bridge-788750)   <name>bridge-788750</name>
	I0317 13:59:16.898286  684423 main.go:141] libmachine: (bridge-788750)   <memory unit='MiB'>3072</memory>
	I0317 13:59:16.898328  684423 main.go:141] libmachine: (bridge-788750)   <vcpu>2</vcpu>
	I0317 13:59:16.898361  684423 main.go:141] libmachine: (bridge-788750)   <features>
	I0317 13:59:16.898371  684423 main.go:141] libmachine: (bridge-788750)     <acpi/>
	I0317 13:59:16.898379  684423 main.go:141] libmachine: (bridge-788750)     <apic/>
	I0317 13:59:16.898391  684423 main.go:141] libmachine: (bridge-788750)     <pae/>
	I0317 13:59:16.898401  684423 main.go:141] libmachine: (bridge-788750)     
	I0317 13:59:16.898409  684423 main.go:141] libmachine: (bridge-788750)   </features>
	I0317 13:59:16.898419  684423 main.go:141] libmachine: (bridge-788750)   <cpu mode='host-passthrough'>
	I0317 13:59:16.898428  684423 main.go:141] libmachine: (bridge-788750)   
	I0317 13:59:16.898436  684423 main.go:141] libmachine: (bridge-788750)   </cpu>
	I0317 13:59:16.898444  684423 main.go:141] libmachine: (bridge-788750)   <os>
	I0317 13:59:16.898452  684423 main.go:141] libmachine: (bridge-788750)     <type>hvm</type>
	I0317 13:59:16.898460  684423 main.go:141] libmachine: (bridge-788750)     <boot dev='cdrom'/>
	I0317 13:59:16.898470  684423 main.go:141] libmachine: (bridge-788750)     <boot dev='hd'/>
	I0317 13:59:16.898477  684423 main.go:141] libmachine: (bridge-788750)     <bootmenu enable='no'/>
	I0317 13:59:16.898485  684423 main.go:141] libmachine: (bridge-788750)   </os>
	I0317 13:59:16.898492  684423 main.go:141] libmachine: (bridge-788750)   <devices>
	I0317 13:59:16.898506  684423 main.go:141] libmachine: (bridge-788750)     <disk type='file' device='cdrom'>
	I0317 13:59:16.898519  684423 main.go:141] libmachine: (bridge-788750)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/boot2docker.iso'/>
	I0317 13:59:16.898532  684423 main.go:141] libmachine: (bridge-788750)       <target dev='hdc' bus='scsi'/>
	I0317 13:59:16.898550  684423 main.go:141] libmachine: (bridge-788750)       <readonly/>
	I0317 13:59:16.898559  684423 main.go:141] libmachine: (bridge-788750)     </disk>
	I0317 13:59:16.898568  684423 main.go:141] libmachine: (bridge-788750)     <disk type='file' device='disk'>
	I0317 13:59:16.898584  684423 main.go:141] libmachine: (bridge-788750)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0317 13:59:16.898615  684423 main.go:141] libmachine: (bridge-788750)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/bridge-788750.rawdisk'/>
	I0317 13:59:16.898627  684423 main.go:141] libmachine: (bridge-788750)       <target dev='hda' bus='virtio'/>
	I0317 13:59:16.898636  684423 main.go:141] libmachine: (bridge-788750)     </disk>
	I0317 13:59:16.898646  684423 main.go:141] libmachine: (bridge-788750)     <interface type='network'>
	I0317 13:59:16.898676  684423 main.go:141] libmachine: (bridge-788750)       <source network='mk-bridge-788750'/>
	I0317 13:59:16.898700  684423 main.go:141] libmachine: (bridge-788750)       <model type='virtio'/>
	I0317 13:59:16.898709  684423 main.go:141] libmachine: (bridge-788750)     </interface>
	I0317 13:59:16.898721  684423 main.go:141] libmachine: (bridge-788750)     <interface type='network'>
	I0317 13:59:16.898738  684423 main.go:141] libmachine: (bridge-788750)       <source network='default'/>
	I0317 13:59:16.898748  684423 main.go:141] libmachine: (bridge-788750)       <model type='virtio'/>
	I0317 13:59:16.898763  684423 main.go:141] libmachine: (bridge-788750)     </interface>
	I0317 13:59:16.898776  684423 main.go:141] libmachine: (bridge-788750)     <serial type='pty'>
	I0317 13:59:16.898787  684423 main.go:141] libmachine: (bridge-788750)       <target port='0'/>
	I0317 13:59:16.898794  684423 main.go:141] libmachine: (bridge-788750)     </serial>
	I0317 13:59:16.898802  684423 main.go:141] libmachine: (bridge-788750)     <console type='pty'>
	I0317 13:59:16.898813  684423 main.go:141] libmachine: (bridge-788750)       <target type='serial' port='0'/>
	I0317 13:59:16.898819  684423 main.go:141] libmachine: (bridge-788750)     </console>
	I0317 13:59:16.898831  684423 main.go:141] libmachine: (bridge-788750)     <rng model='virtio'>
	I0317 13:59:16.898839  684423 main.go:141] libmachine: (bridge-788750)       <backend model='random'>/dev/random</backend>
	I0317 13:59:16.898851  684423 main.go:141] libmachine: (bridge-788750)     </rng>
	I0317 13:59:16.898874  684423 main.go:141] libmachine: (bridge-788750)     
	I0317 13:59:16.898906  684423 main.go:141] libmachine: (bridge-788750)     
	I0317 13:59:16.898924  684423 main.go:141] libmachine: (bridge-788750)   </devices>
	I0317 13:59:16.898943  684423 main.go:141] libmachine: (bridge-788750) </domain>
	I0317 13:59:16.898963  684423 main.go:141] libmachine: (bridge-788750) 
	I0317 13:59:16.903437  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:d3:5c:cd in network default
	I0317 13:59:16.904002  684423 main.go:141] libmachine: (bridge-788750) starting domain...
	I0317 13:59:16.904026  684423 main.go:141] libmachine: (bridge-788750) ensuring networks are active...
	I0317 13:59:16.904037  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:16.904754  684423 main.go:141] libmachine: (bridge-788750) Ensuring network default is active
	I0317 13:59:16.905086  684423 main.go:141] libmachine: (bridge-788750) Ensuring network mk-bridge-788750 is active
	I0317 13:59:16.905562  684423 main.go:141] libmachine: (bridge-788750) getting domain XML...
	I0317 13:59:16.906187  684423 main.go:141] libmachine: (bridge-788750) creating domain...
	I0317 13:59:18.327351  684423 main.go:141] libmachine: (bridge-788750) waiting for IP...
	I0317 13:59:18.328411  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:18.328897  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:18.328988  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:18.328892  684478 retry.go:31] will retry after 281.911181ms: waiting for domain to come up
	I0317 13:59:18.613012  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:18.613673  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:18.613705  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:18.613640  684478 retry.go:31] will retry after 285.120088ms: waiting for domain to come up
	I0317 13:59:18.900301  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:18.900985  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:18.901010  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:18.900958  684478 retry.go:31] will retry after 300.755427ms: waiting for domain to come up
	I0317 13:59:19.203685  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:19.204433  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:19.204487  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:19.204404  684478 retry.go:31] will retry after 482.495453ms: waiting for domain to come up
	I0317 13:59:19.688081  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:19.688673  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:19.688704  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:19.688626  684478 retry.go:31] will retry after 726.121432ms: waiting for domain to come up
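
At this point the bridge-788750 machine is still waiting for a DHCP lease, retrying with a varying delay ("will retry after ..."). A generic Go sketch of such a poll-with-jittered-backoff loop follows; lookupIP is a hypothetical placeholder for querying the libvirt network's leases, and the timings are not the ones minikube uses.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for inspecting the network's DHCP leases; in this
// sketch it never succeeds, so waitForIP simply runs until the deadline.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls lookupIP until it succeeds or the timeout elapses,
// sleeping for a jittered, growing delay between attempts.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, echoing the varying intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("got IP:", ip)
	}
}
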
	I0317 13:59:17.651513  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:17.654706  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:17.655140  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:17.655175  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:17.655433  682662 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0317 13:59:17.659262  682662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:59:17.670748  682662 kubeadm.go:883] updating cluster {Name:flannel-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:59:17.670853  682662 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:59:17.670896  682662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:17.702512  682662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0317 13:59:17.702583  682662 ssh_runner.go:195] Run: which lz4
	I0317 13:59:17.706362  682662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:59:17.710341  682662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:59:17.710372  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0317 13:59:19.011475  682662 crio.go:462] duration metric: took 1.305154533s to copy over tarball
	I0317 13:59:19.011575  682662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:59:21.330326  682662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.318697484s)
	I0317 13:59:21.330366  682662 crio.go:469] duration metric: took 2.318859908s to extract the tarball
	I0317 13:59:21.330377  682662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:59:21.368396  682662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:21.409403  682662 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:59:21.409435  682662 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:59:21.409446  682662 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.32.2 crio true true} ...
	I0317 13:59:21.409567  682662 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-788750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0317 13:59:21.409728  682662 ssh_runner.go:195] Run: crio config
	I0317 13:59:21.461149  682662 cni.go:84] Creating CNI manager for "flannel"
	I0317 13:59:21.461173  682662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:59:21.461196  682662 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-788750 NodeName:flannel-788750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:59:21.461312  682662 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-788750"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.30"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:59:21.461375  682662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:59:21.471315  682662 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:59:21.471401  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:59:21.480637  682662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0317 13:59:21.497818  682662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:59:21.514202  682662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0317 13:59:21.531846  682662 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0317 13:59:21.535852  682662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:59:21.547918  682662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:21.686995  682662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:59:21.707033  682662 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750 for IP: 192.168.72.30
	I0317 13:59:21.707066  682662 certs.go:194] generating shared ca certs ...
	I0317 13:59:21.707100  682662 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:21.707315  682662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:59:21.707394  682662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:59:21.707412  682662 certs.go:256] generating profile certs ...
	I0317 13:59:21.707485  682662 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.key
	I0317 13:59:21.707504  682662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt with IP's: []
	I0317 13:59:21.991318  682662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt ...
	I0317 13:59:21.991349  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: {Name:mk98eed9ca2b5d327d7f4f5299f99a2ef0fd27b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:21.991510  682662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.key ...
	I0317 13:59:21.991521  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.key: {Name:mkb9d21292c13affabb06e343bb09c1a56eddefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:21.991629  682662 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262
	I0317 13:59:21.991650  682662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.30]
	I0317 13:59:22.386930  682662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262 ...
	I0317 13:59:22.386968  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262: {Name:mk5b32d7f691721ce84195f520653f84677487de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:22.387146  682662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262 ...
	I0317 13:59:22.387165  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262: {Name:mk49fe6f886e1c3fa3806fbf01bfe3f58ce4f93f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:22.387271  682662 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt
	I0317 13:59:22.387368  682662 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key
	I0317 13:59:22.387444  682662 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key
	I0317 13:59:22.387468  682662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt with IP's: []
	I0317 13:59:23.150969  682662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt ...
	I0317 13:59:23.151001  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt: {Name:mke13a7275b9ea4a183b0de420ac1690d8c1d05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:23.151192  682662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key ...
	I0317 13:59:23.151219  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key: {Name:mk045a87c5e6145ebe19bfd7ec6b3783a3d14258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:23.151427  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:59:23.151466  682662 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:59:23.151476  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:59:23.151497  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:59:23.151524  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:59:23.151569  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:59:23.151609  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:23.152112  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:59:23.175083  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:59:23.197285  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:59:23.225143  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:59:23.247634  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 13:59:23.272656  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 13:59:23.306874  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:59:23.338116  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:59:23.368010  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:59:23.394314  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:59:23.423219  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:59:23.446176  682662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:59:23.463202  682662 ssh_runner.go:195] Run: openssl version
	I0317 13:59:23.469202  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:59:23.482769  682662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:23.487375  682662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:23.487441  682662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:23.493591  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:59:23.505891  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:59:23.518142  682662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:59:23.523622  682662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:59:23.523688  682662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:59:23.529371  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:59:23.542335  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:59:23.553110  682662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:59:23.557761  682662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:59:23.557819  682662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:59:23.563002  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
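The three openssl/ln pairs above implement OpenSSL's hashed-CA-directory convention: a CA in /etc/ssl/certs is found through a symlink named after its subject hash. A minimal sketch of one iteration (cert path taken from the log; the extra indirection through a second symlink is dropped for brevity):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints the subject hash, e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"  # OpenSSL looks CAs up as <subject-hash>.0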
	I0317 13:59:23.572975  682662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:59:23.576788  682662 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:59:23.576843  682662 kubeadm.go:392] StartCluster: {Name:flannel-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:59:23.576909  682662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:59:23.576950  682662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:59:23.620638  682662 cri.go:89] found id: ""
	I0317 13:59:23.620723  682662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:59:23.630796  682662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:59:23.641676  682662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:59:23.651990  682662 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:59:23.652016  682662 kubeadm.go:157] found existing configuration files:
	
	I0317 13:59:23.652066  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:59:23.662175  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:59:23.662253  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:59:23.673817  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:59:23.683465  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:59:23.683547  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:59:23.695127  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:59:23.708536  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:59:23.708603  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:59:23.720370  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:59:23.729693  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:59:23.729756  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
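Each of the four checks above follows the same pattern: a kubeconfig fragment under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so the upcoming kubeadm init can regenerate it. A condensed sketch of one iteration, with the path and endpoint taken from the log:

    f=/etc/kubernetes/admin.conf
    # grep fails both when the endpoint is missing and when the file does not exist
    sudo grep -q "https://control-plane.minikube.internal:8443" "$f" || sudo rm -f "$f"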
	I0317 13:59:23.741346  682662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:59:23.795773  682662 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 13:59:23.795912  682662 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:59:23.888818  682662 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:59:23.889041  682662 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:59:23.889172  682662 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 13:59:23.902164  682662 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:59:20.416846  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:20.417388  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:20.417472  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:20.417375  684478 retry.go:31] will retry after 578.975886ms: waiting for domain to come up
	I0317 13:59:20.998084  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:20.998743  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:20.998773  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:20.998683  684478 retry.go:31] will retry after 1.168593486s: waiting for domain to come up
	I0317 13:59:22.168602  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:22.169205  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:22.169302  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:22.169195  684478 retry.go:31] will retry after 915.875846ms: waiting for domain to come up
	I0317 13:59:23.086435  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:23.086889  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:23.086917  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:23.086855  684478 retry.go:31] will retry after 1.782289012s: waiting for domain to come up
	I0317 13:59:24.872807  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:24.873338  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:24.873403  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:24.873330  684478 retry.go:31] will retry after 2.082516204s: waiting for domain to come up
	I0317 13:59:24.044110  682662 out.go:235]   - Generating certificates and keys ...
	I0317 13:59:24.044245  682662 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:59:24.044341  682662 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:59:24.044450  682662 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:59:24.201837  682662 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:59:24.546018  682662 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:59:24.644028  682662 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:59:24.791251  682662 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:59:24.791629  682662 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-788750 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0317 13:59:25.148014  682662 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:59:25.148303  682662 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-788750 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0317 13:59:25.299352  682662 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:59:25.535177  682662 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:59:25.769563  682662 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:59:25.769811  682662 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:59:25.913584  682662 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:59:26.217258  682662 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 13:59:26.606599  682662 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:59:26.749144  682662 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:59:26.904044  682662 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:59:26.904808  682662 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:59:26.907777  682662 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:59:26.958055  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:26.958771  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:26.958797  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:26.958687  684478 retry.go:31] will retry after 1.918434497s: waiting for domain to come up
	I0317 13:59:28.884652  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:28.884965  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:28.885027  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:28.884968  684478 retry.go:31] will retry after 2.779630313s: waiting for domain to come up
	I0317 13:59:26.909655  682662 out.go:235]   - Booting up control plane ...
	I0317 13:59:26.909809  682662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:59:26.909938  682662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:59:26.910666  682662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:59:26.932841  682662 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:59:26.939466  682662 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:59:26.939639  682662 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:59:27.099640  682662 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 13:59:27.099824  682662 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 13:59:28.099988  682662 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001156397s
	I0317 13:59:28.100085  682662 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 13:59:33.100467  682662 kubeadm.go:310] [api-check] The API server is healthy after 5.001005071s
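Both health checks above poll plain HTTP(S) endpoints and can be reproduced by hand on the node; the kubelet port and the API server address come from this log, and -k is needed because the API server presents the cluster's self-signed certificate (this assumes anonymous access to /healthz is allowed, which is the kubeadm default):

    curl -s  http://127.0.0.1:10248/healthz        # kubelet health endpoint
    curl -sk https://192.168.72.30:8443/healthz    # API server health endpoint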
	I0317 13:59:33.110998  682662 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 13:59:33.130476  682662 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 13:59:33.152222  682662 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 13:59:33.152448  682662 kubeadm.go:310] [mark-control-plane] Marking the node flannel-788750 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 13:59:33.165204  682662 kubeadm.go:310] [bootstrap-token] Using token: wul87d.x4r8hdwyi1r15k1o
	I0317 13:59:33.166488  682662 out.go:235]   - Configuring RBAC rules ...
	I0317 13:59:33.166623  682662 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 13:59:33.171916  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 13:59:33.180293  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 13:59:33.183680  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 13:59:33.187126  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 13:59:33.198883  682662 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 13:59:33.509086  682662 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 13:59:33.946270  682662 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 13:59:34.504578  682662 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 13:59:34.505419  682662 kubeadm.go:310] 
	I0317 13:59:34.505481  682662 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 13:59:34.505490  682662 kubeadm.go:310] 
	I0317 13:59:34.505565  682662 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 13:59:34.505572  682662 kubeadm.go:310] 
	I0317 13:59:34.505592  682662 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 13:59:34.505640  682662 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 13:59:34.505688  682662 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 13:59:34.505694  682662 kubeadm.go:310] 
	I0317 13:59:34.505736  682662 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 13:59:34.505761  682662 kubeadm.go:310] 
	I0317 13:59:34.505821  682662 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 13:59:34.505829  682662 kubeadm.go:310] 
	I0317 13:59:34.505873  682662 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 13:59:34.505941  682662 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 13:59:34.506011  682662 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 13:59:34.506018  682662 kubeadm.go:310] 
	I0317 13:59:34.506093  682662 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 13:59:34.506173  682662 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 13:59:34.506184  682662 kubeadm.go:310] 
	I0317 13:59:34.506253  682662 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wul87d.x4r8hdwyi1r15k1o \
	I0317 13:59:34.506349  682662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 \
	I0317 13:59:34.506371  682662 kubeadm.go:310] 	--control-plane 
	I0317 13:59:34.506377  682662 kubeadm.go:310] 
	I0317 13:59:34.506447  682662 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 13:59:34.506453  682662 kubeadm.go:310] 
	I0317 13:59:34.506520  682662 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wul87d.x4r8hdwyi1r15k1o \
	I0317 13:59:34.506607  682662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 
	I0317 13:59:34.507486  682662 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
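The sha256 value in the join commands above is a hash of the cluster CA's public key. Assuming an RSA CA key, it can be recomputed on the control-plane node with the usual kubeadm recipe (certificate directory taken from the [certs] line earlier in this log):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'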
	I0317 13:59:34.507585  682662 cni.go:84] Creating CNI manager for "flannel"
	I0317 13:59:34.509544  682662 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0317 13:59:31.666320  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:31.666834  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:31.666882  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:31.666805  684478 retry.go:31] will retry after 4.169301354s: waiting for domain to come up
	I0317 13:59:34.510695  682662 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 13:59:34.516421  682662 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 13:59:34.516443  682662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0317 13:59:34.533714  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 13:59:34.893048  682662 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:59:34.893132  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:34.893146  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-788750 minikube.k8s.io/updated_at=2025_03_17T13_59_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=flannel-788750 minikube.k8s.io/primary=true
	I0317 13:59:34.943374  682662 ops.go:34] apiserver oom_adj: -16
	I0317 13:59:35.063217  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:35.563994  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:36.063612  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:36.563720  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:37.063942  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:37.564320  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:37.648519  682662 kubeadm.go:1113] duration metric: took 2.75546083s to wait for elevateKubeSystemPrivileges
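The repeated "get sa default" calls above are a wait loop: minikube polls until the default ServiceAccount exists before treating the minikube-rbac ClusterRoleBinding step as complete. A hypothetical condensed form of that loop, using the binary and kubeconfig paths from the log:

    until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry interval chosen arbitrarily for this sketch
    done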
	I0317 13:59:37.648579  682662 kubeadm.go:394] duration metric: took 14.071739112s to StartCluster
	I0317 13:59:37.648605  682662 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:37.648696  682662 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:59:37.649597  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:37.649855  682662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 13:59:37.649879  682662 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:59:37.649947  682662 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:59:37.650030  682662 addons.go:69] Setting storage-provisioner=true in profile "flannel-788750"
	I0317 13:59:37.650048  682662 addons.go:238] Setting addon storage-provisioner=true in "flannel-788750"
	I0317 13:59:37.650053  682662 addons.go:69] Setting default-storageclass=true in profile "flannel-788750"
	I0317 13:59:37.650079  682662 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-788750"
	I0317 13:59:37.650086  682662 host.go:66] Checking if "flannel-788750" exists ...
	I0317 13:59:37.650103  682662 config.go:182] Loaded profile config "flannel-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:37.650523  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.650547  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.650577  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.650701  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.651428  682662 out.go:177] * Verifying Kubernetes components...
	I0317 13:59:37.652854  682662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:37.666591  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40423
	I0317 13:59:37.666955  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0317 13:59:37.667168  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.667460  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.667722  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.667748  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.667987  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.668015  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.668092  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.668273  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:37.668352  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.668884  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.668932  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.671496  682662 addons.go:238] Setting addon default-storageclass=true in "flannel-788750"
	I0317 13:59:37.671568  682662 host.go:66] Checking if "flannel-788750" exists ...
	I0317 13:59:37.671824  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.671869  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.684502  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0317 13:59:37.685086  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.685604  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.685635  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.685998  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.686195  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:37.687133  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42993
	I0317 13:59:37.687558  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.687970  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.687999  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.688053  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:37.688335  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.688773  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.688810  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.689803  682662 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:59:37.690875  682662 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:59:37.690892  682662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 13:59:37.690913  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:37.694025  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.694514  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:37.694551  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.694795  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:37.694967  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:37.695127  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:37.695254  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:37.705449  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0317 13:59:37.705987  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.706407  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.706421  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.706665  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.706825  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:37.708262  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:37.708483  682662 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 13:59:37.708500  682662 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 13:59:37.708528  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:37.711278  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.711693  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:37.711724  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.711884  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:37.712049  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:37.712181  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:37.712280  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:37.792332  682662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 13:59:37.833072  682662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:59:38.013202  682662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 13:59:38.016077  682662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:59:38.285460  682662 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
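The sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway address. The resulting Corefile contains a hosts block equivalent to the one in the comments below and can be inspected with kubectl (paths as used throughout this log):

    sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml
    # expected fragment inside the Corefile:
    #   hosts {
    #      192.168.72.1 host.minikube.internal
    #      fallthrough
    #   }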
	I0317 13:59:38.286336  682662 node_ready.go:35] waiting up to 15m0s for node "flannel-788750" to be "Ready" ...
	I0317 13:59:38.286667  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.286688  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.286982  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.287000  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.287008  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.287015  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.287297  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.287316  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.287299  682662 main.go:141] libmachine: (flannel-788750) DBG | Closing plugin on server side
	I0317 13:59:38.336875  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.336910  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.337207  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.337268  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.337215  682662 main.go:141] libmachine: (flannel-788750) DBG | Closing plugin on server side
	I0317 13:59:38.523826  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.523846  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.524131  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.524148  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.524156  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.524163  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.524185  682662 main.go:141] libmachine: (flannel-788750) DBG | Closing plugin on server side
	I0317 13:59:38.524385  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.524400  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.525951  682662 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0317 13:59:35.840720  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:35.841321  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:35.841354  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:35.841303  684478 retry.go:31] will retry after 5.187885311s: waiting for domain to come up
	I0317 13:59:38.527111  682662 addons.go:514] duration metric: took 877.168808ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0317 13:59:38.789426  682662 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-788750" context rescaled to 1 replicas
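The rescale reported above trims CoreDNS from kubeadm's default of two replicas to one for this single-node cluster. minikube performs it through the Kubernetes API client, but it is roughly equivalent to:

    sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1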
	I0317 13:59:41.035122  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.035760  684423 main.go:141] libmachine: (bridge-788750) found domain IP: 192.168.39.172
	I0317 13:59:41.035781  684423 main.go:141] libmachine: (bridge-788750) reserving static IP address...
	I0317 13:59:41.035790  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has current primary IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.036284  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find host DHCP lease matching {name: "bridge-788750", mac: "52:54:00:f1:de:c9", ip: "192.168.39.172"} in network mk-bridge-788750
	I0317 13:59:41.115751  684423 main.go:141] libmachine: (bridge-788750) reserved static IP address 192.168.39.172 for domain bridge-788750
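Reserving the static IP above amounts to adding a DHCP host entry for the VM's MAC address to the libvirt network so the lease survives renewals. The kvm2 driver does this through the libvirt API; a roughly equivalent manual invocation, with the MAC, hostname, IP and network name taken from the log, would be:

    virsh net-update mk-bridge-788750 add ip-dhcp-host \
      "<host mac='52:54:00:f1:de:c9' name='bridge-788750' ip='192.168.39.172'/>" \
      --live --config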
	I0317 13:59:41.115782  684423 main.go:141] libmachine: (bridge-788750) DBG | Getting to WaitForSSH function...
	I0317 13:59:41.115798  684423 main.go:141] libmachine: (bridge-788750) waiting for SSH...
	I0317 13:59:41.118645  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.119016  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.119063  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.119199  684423 main.go:141] libmachine: (bridge-788750) DBG | Using SSH client type: external
	I0317 13:59:41.119225  684423 main.go:141] libmachine: (bridge-788750) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa (-rw-------)
	I0317 13:59:41.119256  684423 main.go:141] libmachine: (bridge-788750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:59:41.119286  684423 main.go:141] libmachine: (bridge-788750) DBG | About to run SSH command:
	I0317 13:59:41.119302  684423 main.go:141] libmachine: (bridge-788750) DBG | exit 0
	I0317 13:59:41.239165  684423 main.go:141] libmachine: (bridge-788750) DBG | SSH cmd err, output: <nil>: 
	I0317 13:59:41.239469  684423 main.go:141] libmachine: (bridge-788750) KVM machine creation complete
	I0317 13:59:41.239768  684423 main.go:141] libmachine: (bridge-788750) Calling .GetConfigRaw
	I0317 13:59:41.240358  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:41.240533  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:41.240709  684423 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 13:59:41.240725  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 13:59:41.242592  684423 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 13:59:41.242609  684423 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 13:59:41.242616  684423 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 13:59:41.242621  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.245217  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.245580  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.245613  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.245737  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.245916  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.246031  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.246188  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.246355  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.246654  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.246667  684423 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 13:59:41.346704  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
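The "exit 0" commands above are libmachine's SSH readiness probe: it keeps opening connections until one succeeds. A hypothetical stand-alone version of the probe, reusing the SSH options and key path shown in the log:

    until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
              -o ConnectTimeout=10 -o IdentitiesOnly=yes \
              -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa \
              docker@192.168.39.172 exit 0; do
      sleep 2   # retry interval chosen arbitrarily for this sketch
    done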
	I0317 13:59:41.346735  684423 main.go:141] libmachine: Detecting the provisioner...
	I0317 13:59:41.346747  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.349542  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.349892  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.349914  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.350053  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.350256  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.350419  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.350553  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.350715  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.350978  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.350993  684423 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 13:59:41.451908  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 13:59:41.452000  684423 main.go:141] libmachine: found compatible host: buildroot
	I0317 13:59:41.452014  684423 main.go:141] libmachine: Provisioning with buildroot...
	I0317 13:59:41.452029  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:41.452354  684423 buildroot.go:166] provisioning hostname "bridge-788750"
	I0317 13:59:41.452380  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:41.452554  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.455040  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.455399  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.455424  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.455605  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.455777  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.455930  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.456042  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.456163  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.456436  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.456451  684423 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-788750 && echo "bridge-788750" | sudo tee /etc/hostname
	I0317 13:59:41.567818  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-788750
	
	I0317 13:59:41.567853  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.570485  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.570807  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.570833  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.570996  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.571193  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.571364  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.571484  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.571645  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.571862  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.571897  684423 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-788750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-788750/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-788750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:59:41.678869  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:59:41.678904  684423 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:59:41.678929  684423 buildroot.go:174] setting up certificates
	I0317 13:59:41.678941  684423 provision.go:84] configureAuth start
	I0317 13:59:41.678954  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:41.679256  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:41.681754  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.682060  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.682086  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.682262  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.684392  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.684679  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.684700  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.684844  684423 provision.go:143] copyHostCerts
	I0317 13:59:41.684907  684423 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:59:41.684932  684423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:59:41.685003  684423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:59:41.685129  684423 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:59:41.685142  684423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:59:41.685177  684423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:59:41.685262  684423 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:59:41.685272  684423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:59:41.685301  684423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:59:41.685372  684423 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.bridge-788750 san=[127.0.0.1 192.168.39.172 bridge-788750 localhost minikube]
	I0317 13:59:41.821887  684423 provision.go:177] copyRemoteCerts
	I0317 13:59:41.821963  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:59:41.821998  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.824975  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.825287  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.825315  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.825479  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.825693  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.825854  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.826011  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:41.905677  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 13:59:41.929529  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:59:41.950905  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 13:59:41.975981  684423 provision.go:87] duration metric: took 297.025637ms to configureAuth
	I0317 13:59:41.976008  684423 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:59:41.976155  684423 config.go:182] Loaded profile config "bridge-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:41.976223  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.978872  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.979159  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.979182  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.979352  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.979562  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.979759  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.979913  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.980059  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.980356  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.980382  684423 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:59:42.210802  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:59:42.210834  684423 main.go:141] libmachine: Checking connection to Docker...
	I0317 13:59:42.210843  684423 main.go:141] libmachine: (bridge-788750) Calling .GetURL
	I0317 13:59:42.212236  684423 main.go:141] libmachine: (bridge-788750) DBG | using libvirt version 6000000
	I0317 13:59:42.214601  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.214997  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.215057  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.215186  684423 main.go:141] libmachine: Docker is up and running!
	I0317 13:59:42.215202  684423 main.go:141] libmachine: Reticulating splines...
	I0317 13:59:42.215215  684423 client.go:171] duration metric: took 25.952374084s to LocalClient.Create
	I0317 13:59:42.215251  684423 start.go:167] duration metric: took 25.952448094s to libmachine.API.Create "bridge-788750"
	I0317 13:59:42.215261  684423 start.go:293] postStartSetup for "bridge-788750" (driver="kvm2")
	I0317 13:59:42.215270  684423 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:59:42.215295  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.215556  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:59:42.215589  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.217971  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.218424  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.218456  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.218633  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.218799  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.218975  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.219128  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:42.300906  684423 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:59:42.304516  684423 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:59:42.304543  684423 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:59:42.304605  684423 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:59:42.304685  684423 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:59:42.304772  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:59:42.313026  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:42.333975  684423 start.go:296] duration metric: took 118.700242ms for postStartSetup
	I0317 13:59:42.334033  684423 main.go:141] libmachine: (bridge-788750) Calling .GetConfigRaw
	I0317 13:59:42.334606  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:42.337068  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.337371  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.337392  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.337630  684423 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/config.json ...
	I0317 13:59:42.337824  684423 start.go:128] duration metric: took 26.101149226s to createHost
	I0317 13:59:42.337851  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.339859  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.340209  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.340235  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.340363  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.340551  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.340698  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.340815  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.340963  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:42.341165  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:42.341174  684423 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:59:42.439850  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219982.415531589
	
	I0317 13:59:42.439872  684423 fix.go:216] guest clock: 1742219982.415531589
	I0317 13:59:42.439880  684423 fix.go:229] Guest: 2025-03-17 13:59:42.415531589 +0000 UTC Remote: 2025-03-17 13:59:42.337836583 +0000 UTC m=+27.394798300 (delta=77.695006ms)
	I0317 13:59:42.439905  684423 fix.go:200] guest clock delta is within tolerance: 77.695006ms
	I0317 13:59:42.439912  684423 start.go:83] releasing machines lock for "bridge-788750", held for 26.203397217s
	I0317 13:59:42.439939  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.440201  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:42.442831  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.443753  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.443782  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.443987  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.444519  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.444688  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.444782  684423 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:59:42.444829  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.444939  684423 ssh_runner.go:195] Run: cat /version.json
	I0317 13:59:42.444960  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.447411  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.447758  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.447784  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.447802  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.447875  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.448064  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.448237  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.448251  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.448269  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.448387  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:42.448444  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.448571  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.448710  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.448828  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:42.541855  684423 ssh_runner.go:195] Run: systemctl --version
	I0317 13:59:42.548086  684423 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:59:42.702999  684423 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:59:42.708812  684423 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:59:42.708887  684423 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:59:42.723697  684423 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:59:42.723726  684423 start.go:495] detecting cgroup driver to use...
	I0317 13:59:42.723794  684423 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:59:42.739584  684423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:59:42.752485  684423 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:59:42.752559  684423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:59:42.765024  684423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:59:42.777346  684423 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:59:42.885029  684423 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:59:43.046402  684423 docker.go:233] disabling docker service ...
	I0317 13:59:43.046499  684423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:59:43.060044  684423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:59:43.072350  684423 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:59:43.187346  684423 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:59:43.322509  684423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:59:43.337797  684423 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:59:43.358051  684423 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0317 13:59:43.358120  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.369454  684423 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:59:43.369564  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.381103  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.392551  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.404000  684423 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:59:43.415664  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.426423  684423 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.448074  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.458706  684423 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:59:43.470365  684423 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:59:43.470437  684423 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:59:43.483041  684423 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:59:43.493515  684423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:43.630579  684423 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:59:43.729029  684423 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:59:43.729100  684423 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:59:43.733964  684423 start.go:563] Will wait 60s for crictl version
	I0317 13:59:43.734029  684423 ssh_runner.go:195] Run: which crictl
	I0317 13:59:43.737635  684423 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:59:43.773498  684423 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:59:43.773598  684423 ssh_runner.go:195] Run: crio --version
	I0317 13:59:43.799596  684423 ssh_runner.go:195] Run: crio --version
	I0317 13:59:43.828121  684423 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0317 13:59:43.829699  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:43.832890  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:43.833374  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:43.833402  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:43.833642  684423 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0317 13:59:43.838613  684423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:59:43.856973  684423 kubeadm.go:883] updating cluster {Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:59:43.857104  684423 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:59:43.857172  684423 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:43.890166  684423 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0317 13:59:43.890276  684423 ssh_runner.go:195] Run: which lz4
	I0317 13:59:43.894425  684423 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:59:43.898332  684423 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:59:43.898364  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0317 13:59:40.289428  682662 node_ready.go:53] node "flannel-788750" has status "Ready":"False"
	I0317 13:59:42.289498  682662 node_ready.go:53] node "flannel-788750" has status "Ready":"False"
	I0317 13:59:44.290188  682662 node_ready.go:53] node "flannel-788750" has status "Ready":"False"
	I0317 13:59:45.225798  684423 crio.go:462] duration metric: took 1.33140551s to copy over tarball
	I0317 13:59:45.225877  684423 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:59:47.413334  684423 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.187422971s)
	I0317 13:59:47.413377  684423 crio.go:469] duration metric: took 2.187543023s to extract the tarball
	I0317 13:59:47.413388  684423 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:59:47.449993  684423 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:47.487606  684423 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:59:47.487630  684423 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:59:47.487638  684423 kubeadm.go:934] updating node { 192.168.39.172 8443 v1.32.2 crio true true} ...
	I0317 13:59:47.487749  684423 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-788750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0317 13:59:47.487816  684423 ssh_runner.go:195] Run: crio config
	I0317 13:59:47.534961  684423 cni.go:84] Creating CNI manager for "bridge"
	I0317 13:59:47.535001  684423 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:59:47.535023  684423 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-788750 NodeName:bridge-788750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:59:47.535182  684423 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-788750"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.172"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:59:47.535265  684423 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:59:47.545198  684423 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:59:47.545288  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:59:47.554688  684423 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0317 13:59:47.570775  684423 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:59:47.586074  684423 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0317 13:59:47.601202  684423 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I0317 13:59:47.604740  684423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:59:47.616014  684423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:47.728366  684423 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:59:47.743468  684423 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750 for IP: 192.168.39.172
	I0317 13:59:47.743513  684423 certs.go:194] generating shared ca certs ...
	I0317 13:59:47.743563  684423 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:47.743737  684423 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:59:47.743797  684423 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:59:47.743818  684423 certs.go:256] generating profile certs ...
	I0317 13:59:47.743881  684423 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.key
	I0317 13:59:47.743903  684423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt with IP's: []
	I0317 13:59:47.925990  684423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt ...
	I0317 13:59:47.926022  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: {Name:mk57b03e60343324f33ad0a804eeb5fac91ff61e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:47.926184  684423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.key ...
	I0317 13:59:47.926194  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.key: {Name:mka3fd5553386d9680255eba9e4b30307d081270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:47.926268  684423 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf
	I0317 13:59:47.926283  684423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172]
	I0317 13:59:48.596199  684423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf ...
	I0317 13:59:48.596251  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf: {Name:mkbe02ed764b875a14246503fcc050fdb71db7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.596488  684423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf ...
	I0317 13:59:48.596518  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf: {Name:mk3fa88c7fab72a1bf633ff2d7f92bde1aceb5c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.596660  684423 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt
	I0317 13:59:48.596782  684423 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key
	I0317 13:59:48.596878  684423 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key
	I0317 13:59:48.596903  684423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt with IP's: []
	I0317 13:59:48.787513  684423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt ...
	I0317 13:59:48.787555  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt: {Name:mkd3b1b33b0e3868ee38a25e6cd6690a1040bc04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.787732  684423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key ...
	I0317 13:59:48.787744  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key: {Name:mkd529fbd19dbc16b398c1bddab0b44e7d4e1345 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.787912  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:59:48.787955  684423 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:59:48.787965  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:59:48.787986  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:59:48.788012  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:59:48.788046  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:59:48.788086  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:48.788618  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:59:48.815047  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:59:48.837091  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:59:48.858337  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:59:48.882013  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 13:59:48.903168  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 13:59:48.925979  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:59:48.946611  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:59:48.970676  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:59:48.997588  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:59:49.019322  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:59:49.041024  684423 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:59:49.056421  684423 ssh_runner.go:195] Run: openssl version
	I0317 13:59:49.061875  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:59:49.072082  684423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:59:49.076316  684423 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:59:49.076377  684423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:59:49.081812  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:59:49.092035  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:59:49.102272  684423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:59:49.106676  684423 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:59:49.106727  684423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:59:49.112133  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:59:49.121990  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:59:49.131611  684423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:49.135725  684423 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:49.135803  684423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:49.141146  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:59:49.151486  684423 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:59:49.155121  684423 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:59:49.155172  684423 kubeadm.go:392] StartCluster: {Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:59:49.155238  684423 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:59:49.155277  684423 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:59:49.187716  684423 cri.go:89] found id: ""
	I0317 13:59:49.187787  684423 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:59:49.197392  684423 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:59:49.206456  684423 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:59:49.215648  684423 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:59:49.215666  684423 kubeadm.go:157] found existing configuration files:
	
	I0317 13:59:49.215701  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:59:49.224457  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:59:49.224510  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:59:49.233665  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:59:49.245183  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:59:49.245257  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:59:49.257317  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:59:49.269822  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:59:49.269892  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:59:49.281019  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:59:49.291173  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:59:49.291250  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:59:49.303204  684423 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:59:49.350647  684423 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 13:59:49.350717  684423 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:59:49.447801  684423 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:59:49.447928  684423 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:59:49.448087  684423 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 13:59:49.457405  684423 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:59:49.542153  684423 out.go:235]   - Generating certificates and keys ...
	I0317 13:59:49.542293  684423 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:59:49.542375  684423 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:59:49.722255  684423 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:59:49.810201  684423 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:59:45.790781  682662 node_ready.go:49] node "flannel-788750" has status "Ready":"True"
	I0317 13:59:45.790806  682662 node_ready.go:38] duration metric: took 7.504444131s for node "flannel-788750" to be "Ready" ...
	I0317 13:59:45.790816  682662 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:59:45.797709  682662 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace to be "Ready" ...
	I0317 13:59:47.804134  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:50.058796  684423 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:59:50.325974  684423 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:59:50.766611  684423 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:59:50.766821  684423 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-788750 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0317 13:59:50.962806  684423 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:59:50.962985  684423 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-788750 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0317 13:59:51.069262  684423 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:59:51.154142  684423 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:59:51.485810  684423 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:59:51.486035  684423 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:59:51.589554  684423 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:59:51.703382  684423 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 13:59:51.818706  684423 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:59:51.939373  684423 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:59:52.087035  684423 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:59:52.087704  684423 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:59:52.090229  684423 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:59:52.093249  684423 out.go:235]   - Booting up control plane ...
	I0317 13:59:52.093382  684423 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:59:52.093493  684423 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:59:52.093923  684423 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:59:52.111087  684423 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:59:52.117277  684423 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:59:52.117337  684423 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:59:52.258455  684423 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 13:59:52.258600  684423 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 13:59:53.259182  684423 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00165578s
	I0317 13:59:53.259294  684423 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 13:59:50.717873  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:52.802425  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:54.804753  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:57.758540  684423 kubeadm.go:310] [api-check] The API server is healthy after 4.501842676s
	I0317 13:59:57.770918  684423 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 13:59:57.784450  684423 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 13:59:57.821683  684423 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 13:59:57.821916  684423 kubeadm.go:310] [mark-control-plane] Marking the node bridge-788750 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 13:59:57.835685  684423 kubeadm.go:310] [bootstrap-token] Using token: 6r2rfy.f4amir38rs4aheab
	I0317 13:59:57.836800  684423 out.go:235]   - Configuring RBAC rules ...
	I0317 13:59:57.836921  684423 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 13:59:57.842871  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 13:59:57.849820  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 13:59:57.853155  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 13:59:57.856545  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 13:59:57.862086  684423 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 13:59:58.165290  684423 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 13:59:58.587281  684423 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 13:59:59.166763  684423 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 13:59:59.167778  684423 kubeadm.go:310] 
	I0317 13:59:59.167887  684423 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 13:59:59.167901  684423 kubeadm.go:310] 
	I0317 13:59:59.167991  684423 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 13:59:59.168016  684423 kubeadm.go:310] 
	I0317 13:59:59.168054  684423 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 13:59:59.168111  684423 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 13:59:59.168153  684423 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 13:59:59.168159  684423 kubeadm.go:310] 
	I0317 13:59:59.168201  684423 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 13:59:59.168207  684423 kubeadm.go:310] 
	I0317 13:59:59.168245  684423 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 13:59:59.168250  684423 kubeadm.go:310] 
	I0317 13:59:59.168299  684423 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 13:59:59.168425  684423 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 13:59:59.168502  684423 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 13:59:59.168521  684423 kubeadm.go:310] 
	I0317 13:59:59.168648  684423 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 13:59:59.168770  684423 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 13:59:59.168781  684423 kubeadm.go:310] 
	I0317 13:59:59.168894  684423 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6r2rfy.f4amir38rs4aheab \
	I0317 13:59:59.169039  684423 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 \
	I0317 13:59:59.169080  684423 kubeadm.go:310] 	--control-plane 
	I0317 13:59:59.169096  684423 kubeadm.go:310] 
	I0317 13:59:59.169180  684423 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 13:59:59.169193  684423 kubeadm.go:310] 
	I0317 13:59:59.169265  684423 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6r2rfy.f4amir38rs4aheab \
	I0317 13:59:59.169358  684423 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 
	I0317 13:59:59.170059  684423 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:59:59.170143  684423 cni.go:84] Creating CNI manager for "bridge"
	I0317 13:59:59.171940  684423 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:59:59.173180  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:59:59.183169  684423 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
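	The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist above are minikube's bridge CNI configuration; the payload itself is not shown in this log. For orientation only, a bridge conflist of the kind CRI-O loads from that directory can be sketched as below -- the subnet and plugin options are illustrative assumptions, not values captured from this run:
	
	# illustrative bridge CNI conflist; subnet and options are assumptions, not from this run
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF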
	I0317 13:59:59.199645  684423 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:59:59.199744  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:59.199776  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-788750 minikube.k8s.io/updated_at=2025_03_17T13_59_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=bridge-788750 minikube.k8s.io/primary=true
	I0317 13:59:59.239955  684423 ops.go:34] apiserver oom_adj: -16
	I0317 13:59:59.366223  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:59.867211  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:57.304825  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:59.803981  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:00.367289  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:00.866372  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:01.366507  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:01.866445  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:02.366509  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:02.866437  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:03.367015  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:03.450761  684423 kubeadm.go:1113] duration metric: took 4.251073397s to wait for elevateKubeSystemPrivileges
	I0317 14:00:03.450805  684423 kubeadm.go:394] duration metric: took 14.295636291s to StartCluster
	I0317 14:00:03.450831  684423 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 14:00:03.450907  684423 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 14:00:03.451925  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 14:00:03.452144  684423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 14:00:03.452156  684423 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 14:00:03.452210  684423 addons.go:69] Setting storage-provisioner=true in profile "bridge-788750"
	I0317 14:00:03.452229  684423 addons.go:238] Setting addon storage-provisioner=true in "bridge-788750"
	I0317 14:00:03.452140  684423 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 14:00:03.452274  684423 host.go:66] Checking if "bridge-788750" exists ...
	I0317 14:00:03.452383  684423 config.go:182] Loaded profile config "bridge-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 14:00:03.452233  684423 addons.go:69] Setting default-storageclass=true in profile "bridge-788750"
	I0317 14:00:03.452450  684423 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-788750"
	I0317 14:00:03.452759  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.452797  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.452814  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.452848  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.454590  684423 out.go:177] * Verifying Kubernetes components...
	I0317 14:00:03.456094  684423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 14:00:03.468607  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I0317 14:00:03.468791  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44917
	I0317 14:00:03.469225  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.469232  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.469737  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.469751  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.469902  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.469927  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.470138  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.470336  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.470543  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 14:00:03.470733  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.470780  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.474090  684423 addons.go:238] Setting addon default-storageclass=true in "bridge-788750"
	I0317 14:00:03.474136  684423 host.go:66] Checking if "bridge-788750" exists ...
	I0317 14:00:03.474497  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.474557  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.490576  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46347
	I0317 14:00:03.491139  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.491781  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.491813  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.492238  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.492487  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 14:00:03.493292  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37659
	I0317 14:00:03.493769  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.494289  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.494321  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.494560  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 14:00:03.494679  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.495346  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.495400  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.496399  684423 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 14:00:02.303406  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:03.303401  682662 pod_ready.go:93] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.303427  682662 pod_ready.go:82] duration metric: took 17.505677844s for pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.303436  682662 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.306929  682662 pod_ready.go:93] pod "etcd-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.306947  682662 pod_ready.go:82] duration metric: took 3.50631ms for pod "etcd-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.306955  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.310273  682662 pod_ready.go:93] pod "kube-apiserver-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.310298  682662 pod_ready.go:82] duration metric: took 3.335994ms for pod "kube-apiserver-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.310311  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.314183  682662 pod_ready.go:93] pod "kube-controller-manager-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.314198  682662 pod_ready.go:82] duration metric: took 3.880278ms for pod "kube-controller-manager-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.314205  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-drfjv" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.318021  682662 pod_ready.go:93] pod "kube-proxy-drfjv" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.318036  682662 pod_ready.go:82] duration metric: took 3.826269ms for pod "kube-proxy-drfjv" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.318043  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.702302  682662 pod_ready.go:93] pod "kube-scheduler-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.702331  682662 pod_ready.go:82] duration metric: took 384.281244ms for pod "kube-scheduler-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.702346  682662 pod_ready.go:39] duration metric: took 17.911515691s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 14:00:03.702367  682662 api_server.go:52] waiting for apiserver process to appear ...
	I0317 14:00:03.702433  682662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 14:00:03.717069  682662 api_server.go:72] duration metric: took 26.067154095s to wait for apiserver process to appear ...
	I0317 14:00:03.717103  682662 api_server.go:88] waiting for apiserver healthz status ...
	I0317 14:00:03.717125  682662 api_server.go:253] Checking apiserver healthz at https://192.168.72.30:8443/healthz ...
	I0317 14:00:03.722046  682662 api_server.go:279] https://192.168.72.30:8443/healthz returned 200:
	ok
	I0317 14:00:03.723179  682662 api_server.go:141] control plane version: v1.32.2
	I0317 14:00:03.723202  682662 api_server.go:131] duration metric: took 6.092065ms to wait for apiserver health ...
	I0317 14:00:03.723210  682662 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 14:00:03.901881  682662 system_pods.go:59] 7 kube-system pods found
	I0317 14:00:03.901916  682662 system_pods.go:61] "coredns-668d6bf9bc-vxj99" [01dabec4-3c24-4158-be92-c3977fb97dfa] Running
	I0317 14:00:03.901922  682662 system_pods.go:61] "etcd-flannel-788750" [59ecce98-e332-4794-969b-ab4e7f6ed07d] Running
	I0317 14:00:03.901926  682662 system_pods.go:61] "kube-apiserver-flannel-788750" [3e3d9c24-5edc-41eb-9283-29aa99fc1350] Running
	I0317 14:00:03.901930  682662 system_pods.go:61] "kube-controller-manager-flannel-788750" [f35d013a-b551-449c-826b-c131d053ca3b] Running
	I0317 14:00:03.901934  682662 system_pods.go:61] "kube-proxy-drfjv" [4f07f0b7-e946-4538-b142-897bdc2bb75d] Running
	I0317 14:00:03.901937  682662 system_pods.go:61] "kube-scheduler-flannel-788750" [ab721524-aac4-418b-990e-1ff6b8018936] Running
	I0317 14:00:03.901940  682662 system_pods.go:61] "storage-provisioner" [4e157f14-2a65-4439-ac31-a04e5cda8332] Running
	I0317 14:00:03.901947  682662 system_pods.go:74] duration metric: took 178.731729ms to wait for pod list to return data ...
	I0317 14:00:03.901954  682662 default_sa.go:34] waiting for default service account to be created ...
	I0317 14:00:04.103094  682662 default_sa.go:45] found service account: "default"
	I0317 14:00:04.103124  682662 default_sa.go:55] duration metric: took 201.164871ms for default service account to be created ...
	I0317 14:00:04.103135  682662 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 14:00:04.301791  682662 system_pods.go:86] 7 kube-system pods found
	I0317 14:00:04.301829  682662 system_pods.go:89] "coredns-668d6bf9bc-vxj99" [01dabec4-3c24-4158-be92-c3977fb97dfa] Running
	I0317 14:00:04.301836  682662 system_pods.go:89] "etcd-flannel-788750" [59ecce98-e332-4794-969b-ab4e7f6ed07d] Running
	I0317 14:00:04.301840  682662 system_pods.go:89] "kube-apiserver-flannel-788750" [3e3d9c24-5edc-41eb-9283-29aa99fc1350] Running
	I0317 14:00:04.301843  682662 system_pods.go:89] "kube-controller-manager-flannel-788750" [f35d013a-b551-449c-826b-c131d053ca3b] Running
	I0317 14:00:04.301847  682662 system_pods.go:89] "kube-proxy-drfjv" [4f07f0b7-e946-4538-b142-897bdc2bb75d] Running
	I0317 14:00:04.301850  682662 system_pods.go:89] "kube-scheduler-flannel-788750" [ab721524-aac4-418b-990e-1ff6b8018936] Running
	I0317 14:00:04.301854  682662 system_pods.go:89] "storage-provisioner" [4e157f14-2a65-4439-ac31-a04e5cda8332] Running
	I0317 14:00:04.301864  682662 system_pods.go:126] duration metric: took 198.721059ms to wait for k8s-apps to be running ...
	I0317 14:00:04.301875  682662 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 14:00:04.301935  682662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 14:00:04.316032  682662 system_svc.go:56] duration metric: took 14.14678ms WaitForService to wait for kubelet
	I0317 14:00:04.316064  682662 kubeadm.go:582] duration metric: took 26.666157602s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 14:00:04.316080  682662 node_conditions.go:102] verifying NodePressure condition ...
	I0317 14:00:04.501869  682662 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 14:00:04.501907  682662 node_conditions.go:123] node cpu capacity is 2
	I0317 14:00:04.501923  682662 node_conditions.go:105] duration metric: took 185.838303ms to run NodePressure ...
	I0317 14:00:04.501938  682662 start.go:241] waiting for startup goroutines ...
	I0317 14:00:04.501947  682662 start.go:246] waiting for cluster config update ...
	I0317 14:00:04.501961  682662 start.go:255] writing updated cluster config ...
	I0317 14:00:04.502390  682662 ssh_runner.go:195] Run: rm -f paused
	I0317 14:00:04.560415  682662 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 14:00:04.562095  682662 out.go:177] * Done! kubectl is now configured to use "flannel-788750" cluster and "default" namespace by default
	I0317 14:00:03.498107  684423 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 14:00:03.498129  684423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 14:00:03.498150  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 14:00:03.501995  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.502514  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 14:00:03.502535  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.502849  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 14:00:03.503042  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 14:00:03.503202  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 14:00:03.503328  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 14:00:03.513291  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I0317 14:00:03.513983  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.514587  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.514619  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.515049  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.515268  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 14:00:03.516963  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 14:00:03.517201  684423 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 14:00:03.517224  684423 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 14:00:03.517247  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 14:00:03.520046  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.520541  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 14:00:03.520586  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.520647  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 14:00:03.520817  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 14:00:03.520958  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 14:00:03.521075  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 14:00:03.660151  684423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
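	The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway 192.168.39.1, and inserts a "log" directive before "errors". A plain kubectl query against the bridge-788750 context is enough to confirm the injected stanza; the expected excerpt below is reconstructed from the sed expression itself, not captured output from this run:
	
	# show the hosts block that the replace above writes into the Corefile
	kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	#        hosts {
	#           192.168.39.1 host.minikube.internal
	#           fallthrough
	#        }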
	I0317 14:00:03.679967  684423 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 14:00:03.844532  684423 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 14:00:03.876251  684423 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 14:00:04.011057  684423 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0317 14:00:04.012032  684423 node_ready.go:35] waiting up to 15m0s for node "bridge-788750" to be "Ready" ...
	I0317 14:00:04.024328  684423 node_ready.go:49] node "bridge-788750" has status "Ready":"True"
	I0317 14:00:04.024353  684423 node_ready.go:38] duration metric: took 12.290285ms for node "bridge-788750" to be "Ready" ...
	I0317 14:00:04.024365  684423 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 14:00:04.028595  684423 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:04.253238  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.253271  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.253666  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.253690  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.253696  684423 main.go:141] libmachine: (bridge-788750) DBG | Closing plugin on server side
	I0317 14:00:04.253704  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.253714  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.253988  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.254006  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.262922  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.262941  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.263255  684423 main.go:141] libmachine: (bridge-788750) DBG | Closing plugin on server side
	I0317 14:00:04.263297  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.263312  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.515692  684423 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-788750" context rescaled to 1 replicas
	I0317 14:00:04.766030  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.766064  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.766393  684423 main.go:141] libmachine: (bridge-788750) DBG | Closing plugin on server side
	I0317 14:00:04.766450  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.766466  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.766480  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.766489  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.766715  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.766733  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.768395  684423 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0317 14:00:04.769582  684423 addons.go:514] duration metric: took 1.317420787s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0317 14:00:08.100227  673643 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0317 14:00:08.100326  673643 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0317 14:00:08.101702  673643 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0317 14:00:08.101771  673643 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 14:00:08.101843  673643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 14:00:08.101949  673643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 14:00:08.102103  673643 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0317 14:00:08.102213  673643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 14:00:08.103900  673643 out.go:235]   - Generating certificates and keys ...
	I0317 14:00:08.103990  673643 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 14:00:08.104047  673643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 14:00:08.104124  673643 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0317 14:00:08.104200  673643 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0317 14:00:08.104303  673643 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0317 14:00:08.104384  673643 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0317 14:00:08.104471  673643 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0317 14:00:08.104558  673643 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0317 14:00:08.104655  673643 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0317 14:00:08.104750  673643 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0317 14:00:08.104799  673643 kubeadm.go:310] [certs] Using the existing "sa" key
	I0317 14:00:08.104865  673643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 14:00:08.104953  673643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 14:00:08.105028  673643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 14:00:08.105106  673643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 14:00:08.105200  673643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 14:00:08.105374  673643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 14:00:08.105449  673643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 14:00:08.105497  673643 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 14:00:08.105613  673643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 14:00:08.107093  673643 out.go:235]   - Booting up control plane ...
	I0317 14:00:08.107203  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 14:00:08.107321  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 14:00:08.107412  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 14:00:08.107544  673643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 14:00:08.107730  673643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0317 14:00:08.107811  673643 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0317 14:00:08.107903  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108136  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108241  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108504  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108614  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108874  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108968  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.109174  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.109230  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.109440  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.109465  673643 kubeadm.go:310] 
	I0317 14:00:08.109515  673643 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0317 14:00:08.109565  673643 kubeadm.go:310] 		timed out waiting for the condition
	I0317 14:00:08.109575  673643 kubeadm.go:310] 
	I0317 14:00:08.109617  673643 kubeadm.go:310] 	This error is likely caused by:
	I0317 14:00:08.109657  673643 kubeadm.go:310] 		- The kubelet is not running
	I0317 14:00:08.109782  673643 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0317 14:00:08.109799  673643 kubeadm.go:310] 
	I0317 14:00:08.109930  673643 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0317 14:00:08.109984  673643 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0317 14:00:08.110027  673643 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0317 14:00:08.110035  673643 kubeadm.go:310] 
	I0317 14:00:08.110118  673643 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0317 14:00:08.110184  673643 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0317 14:00:08.110190  673643 kubeadm.go:310] 
	I0317 14:00:08.110328  673643 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0317 14:00:08.110435  673643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0317 14:00:08.110496  673643 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0317 14:00:08.110562  673643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0317 14:00:08.110602  673643 kubeadm.go:310] 
	I0317 14:00:08.110625  673643 kubeadm.go:394] duration metric: took 7m57.828587617s to StartCluster
	I0317 14:00:08.110682  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 14:00:08.110737  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 14:00:08.142741  673643 cri.go:89] found id: ""
	I0317 14:00:08.142781  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.142795  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 14:00:08.142804  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 14:00:08.142877  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 14:00:08.174753  673643 cri.go:89] found id: ""
	I0317 14:00:08.174784  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.174796  673643 logs.go:284] No container was found matching "etcd"
	I0317 14:00:08.174804  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 14:00:08.174859  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 14:00:08.204965  673643 cri.go:89] found id: ""
	I0317 14:00:08.204997  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.205009  673643 logs.go:284] No container was found matching "coredns"
	I0317 14:00:08.205017  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 14:00:08.205081  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 14:00:08.235717  673643 cri.go:89] found id: ""
	I0317 14:00:08.235749  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.235757  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 14:00:08.235767  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 14:00:08.235833  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 14:00:08.265585  673643 cri.go:89] found id: ""
	I0317 14:00:08.265613  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.265623  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 14:00:08.265631  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 14:00:08.265718  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 14:00:08.295600  673643 cri.go:89] found id: ""
	I0317 14:00:08.295629  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.295641  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 14:00:08.295648  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 14:00:08.295713  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 14:00:08.327749  673643 cri.go:89] found id: ""
	I0317 14:00:08.327778  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.327787  673643 logs.go:284] No container was found matching "kindnet"
	I0317 14:00:08.327794  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 14:00:08.327855  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 14:00:08.359913  673643 cri.go:89] found id: ""
	I0317 14:00:08.359944  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.359952  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 14:00:08.359962  673643 logs.go:123] Gathering logs for container status ...
	I0317 14:00:08.359975  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 14:00:08.396929  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 14:00:08.396959  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 14:00:08.451498  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 14:00:08.451556  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 14:00:08.464742  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 14:00:08.464771  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 14:00:08.537703  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 14:00:08.537733  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 14:00:08.537749  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0317 14:00:08.658936  673643 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0317 14:00:08.659006  673643 out.go:270] * 
	W0317 14:00:08.659061  673643 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0317 14:00:08.659074  673643 out.go:270] * 
	W0317 14:00:08.659944  673643 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 14:00:08.663521  673643 out.go:201] 
	W0317 14:00:08.664750  673643 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0317 14:00:08.664794  673643 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0317 14:00:08.664812  673643 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0317 14:00:08.666351  673643 out.go:201] 
	
	
	==> CRI-O <==
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.663801251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220009663781733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb775594-a6c2-44b0-b286-61204b98a512 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.664228778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c68ded5b-c377-47f9-803b-070b1747ceec name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.664279671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c68ded5b-c377-47f9-803b-070b1747ceec name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.664308782Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c68ded5b-c377-47f9-803b-070b1747ceec name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.693196996Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38c5d590-d09e-4311-93da-bcf6b3a6a995 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.693308310Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38c5d590-d09e-4311-93da-bcf6b3a6a995 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.694259109Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59b4b6c2-4cba-4bfd-8cf3-5d97d1fab6b6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.694639256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220009694615792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59b4b6c2-4cba-4bfd-8cf3-5d97d1fab6b6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.695202865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5c1d855-181d-4b26-b469-fbd6003bb418 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.695248876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5c1d855-181d-4b26-b469-fbd6003bb418 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.695279880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d5c1d855-181d-4b26-b469-fbd6003bb418 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.724345857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84671b34-634f-4e93-813b-7d2094b9c182 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.724414476Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84671b34-634f-4e93-813b-7d2094b9c182 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.725670314Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db4f2deb-2f1f-470d-921e-9d277f5d2bd9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.726031878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220009726011842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db4f2deb-2f1f-470d-921e-9d277f5d2bd9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.726475311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44541517-bb99-4148-92df-7c9e54ff2e9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.726527490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44541517-bb99-4148-92df-7c9e54ff2e9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.726608520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=44541517-bb99-4148-92df-7c9e54ff2e9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.757640864Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45491a66-3f68-4bdb-a392-7b07f10d24b5 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.757711787Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45491a66-3f68-4bdb-a392-7b07f10d24b5 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.758769399Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ba1c469-a281-4538-b144-e7274cd15ddd name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.759139197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220009759118077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ba1c469-a281-4538-b144-e7274cd15ddd name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.759718975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=347e9f64-1951-494b-87ff-70b2bae265d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.759784437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=347e9f64-1951-494b-87ff-70b2bae265d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:00:09 old-k8s-version-803027 crio[632]: time="2025-03-17 14:00:09.759818284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=347e9f64-1951-494b-87ff-70b2bae265d8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar17 13:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049623] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037597] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.982931] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.930656] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.556247] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar17 13:52] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.058241] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060217] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.180401] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.139897] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.253988] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.875702] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.060155] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.298688] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.779171] kauditd_printk_skb: 46 callbacks suppressed
	[Mar17 13:56] systemd-fstab-generator[5028]: Ignoring "noauto" option for root device
	[Mar17 13:58] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.074533] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:00:09 up 8 min,  0 users,  load average: 0.05, 0.09, 0.06
	Linux old-k8s-version-803027 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000bc7200)
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]: goroutine 155 [select]:
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a6def0, 0x4f0ac20, 0xc0009251d0, 0x1, 0xc00009e0c0)
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001d2380, 0xc00009e0c0)
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bc0f10, 0xc000bb7d40)
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 17 14:00:08 old-k8s-version-803027 kubelet[5483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 17 14:00:08 old-k8s-version-803027 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 17 14:00:08 old-k8s-version-803027 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 17 14:00:09 old-k8s-version-803027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 17 14:00:09 old-k8s-version-803027 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 17 14:00:09 old-k8s-version-803027 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 17 14:00:09 old-k8s-version-803027 kubelet[5554]: I0317 14:00:09.320986    5554 server.go:416] Version: v1.20.0
	Mar 17 14:00:09 old-k8s-version-803027 kubelet[5554]: I0317 14:00:09.321432    5554 server.go:837] Client rotation is on, will bootstrap in background
	Mar 17 14:00:09 old-k8s-version-803027 kubelet[5554]: I0317 14:00:09.323993    5554 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 17 14:00:09 old-k8s-version-803027 kubelet[5554]: I0317 14:00:09.325070    5554 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 17 14:00:09 old-k8s-version-803027 kubelet[5554]: W0317 14:00:09.325129    5554 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-803027 -n old-k8s-version-803027
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 2 (234.082219ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-803027" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (507.07s)
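The SecondStart failure above is the K8S_KUBELET_NOT_RUNNING exit captured in the log: kubelet v1.20.0 crash-loops on the old-k8s-version-803027 guest (systemd restart counter at 20, "Cannot detect current cgroup on cgroup v2"), so kubeadm's wait-control-plane phase times out and the apiserver never comes up. The commands below are only a diagnosis/retry sketch assembled from the hints already printed in the log (journalctl, crictl, and the --extra-config=kubelet.cgroup-driver=systemd suggestion); the profile name and Kubernetes version are taken from this run, and whether the cgroup-driver override actually clears the crash loop is an assumption, not something this report verifies.

	# Inspect the kubelet crash loop inside the guest (commands taken from the kubeadm/minikube hints above).
	minikube -p old-k8s-version-803027 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-803027 ssh "sudo journalctl -xeu kubelet"
	minikube -p old-k8s-version-803027 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Hypothetical retry with the cgroup-driver override suggested in the minikube output above.
	minikube start -p old-k8s-version-803027 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd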

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
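Each failed poll below is logged as a warning; while the apiserver at 192.168.61.229:8443 refuses connections, no attempt can match a pod. As a rough manual equivalent of this wait (assuming the kubeconfig context is named after the profile, old-k8s-version-803027, which is minikube's default but is not shown in this log):

	# Hypothetical manual check mirroring the test's label-selector poll.
	kubectl --context old-k8s-version-803027 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard
	# Or block until the dashboard pod is Ready, matching the 9m0s budget above.
	kubectl --context old-k8s-version-803027 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s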
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... warning above repeated 7 more times ...]
I0317 14:00:17.404287  629188 config.go:182] Loaded profile config "bridge-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... warning above repeated 7 more times ...]
E0317 14:00:26.054615  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/default-k8s-diff-port-064245/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... warning above repeated 81 more times ...]
E0317 14:01:47.976456  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/default-k8s-diff-port-064245/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... warning above repeated 15 more times ...]
E0317 14:02:03.781534  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:03.787896  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:03.799210  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:03.820554  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:03.862021  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:03.943517  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:04.105198  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:02:04.427059  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:05.068490  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 2 times)
E0317 14:02:06.350171  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 2 times)
E0317 14:02:08.911632  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 4 times)
E0317 14:02:12.573058  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:02:14.033986  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 6 times)
E0317 14:02:19.524962  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/no-preload-142429/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 4 times)
E0317 14:02:24.275799  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 14 times)
E0317 14:02:38.185771  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:38.192292  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:38.203832  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:38.225345  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:38.266884  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:02:38.348263  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:38.509894  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:02:38.831695  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:02:39.473665  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:02:40.755448  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 3 times)
E0317 14:02:43.317448  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:02:44.757868  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 2 times)
E0317 14:02:47.229721  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/no-preload-142429/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 2 times)
E0317 14:02:48.439749  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 10 times)
E0317 14:02:58.681802  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 17 times)
E0317 14:03:15.512361  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:15.518716  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:15.530057  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:15.551452  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:15.592886  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:15.674343  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:15.835891  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:16.157568  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:03:16.799377  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:03:18.080712  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:03:19.163209  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 2 times)
E0317 14:03:20.642837  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 5 times)
E0317 14:03:25.719680  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:25.765155  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 10 times)
E0317 14:03:35.643601  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:36.007479  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 9 times)
E0317 14:03:44.452738  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 3 times)
E0317 14:03:47.686835  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:47.693298  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:47.704758  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:47.726228  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:47.767776  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:47.849351  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:48.011042  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:03:48.332895  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:03:48.975013  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:03:50.256791  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 3 times)
E0317 14:03:52.818799  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused (message repeated 4 times)
E0317 14:03:56.489372  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:03:57.940911  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:00.124569  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:04.111836  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/default-k8s-diff-port-064245/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:06.911138  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:04:06.917624  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:04:06.929024  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:04:06.950560  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:04:06.992078  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:04:07.073658  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:04:07.235321  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:07.556870  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:04:08.182358  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:04:08.199122  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:09.481432  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:12.043185  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:17.165151  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:27.407324  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:28.664542  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:31.818024  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/default-k8s-diff-port-064245/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:37.450815  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:04:47.641662  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:04:47.889564  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:04.591516  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:04.597916  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:04.609294  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:04.630662  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:04.672052  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:04.753614  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:04.915199  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:05.236965  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:05.879202  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:07.161135  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:09.626485  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:09.723162  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:14.845465  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:17.594376  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:17.600730  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:17.612098  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:17.633499  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:17.674883  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:17.756530  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:17.918070  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
E0317 14:05:18.239989  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:18.882054  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:20.163763  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:22.046958  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:22.725213  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:25.087635  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:27.847136  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:28.851691  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:38.088624  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:45.569686  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:58.569988  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:05:59.373156  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 26 times)
E0317 14:06:26.532016  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 4 times)
E0317 14:06:31.548058  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 7 times)
E0317 14:06:39.532094  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 10 times)
E0317 14:06:50.773762  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 12 times)
E0317 14:07:03.782529  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 8 times)
E0317 14:07:12.573306  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 6 times)
E0317 14:07:19.525539  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/no-preload-142429/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 11 times)
E0317 14:07:31.483612  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 5 times)
E0317 14:07:38.185770  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 10 times)
E0317 14:07:48.453983  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 12 times)
E0317 14:08:01.453510  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 3 times)
E0317 14:08:05.888979  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 9 times)
E0317 14:08:15.512404  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
(last message repeated 24 times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:08:43.214806  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:08:44.452694  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:08:47.686673  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
    [identical WARNING line repeated 16 times while the apiserver at 192.168.61.229:8443 refused connections]
E0317 14:09:04.112159  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/default-k8s-diff-port-064245/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:09:06.911844  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-803027 -n old-k8s-version-803027
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 2 (224.655568ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-803027" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
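The WARNING lines above show the probe behind this failure: the test repeatedly lists pods in the kubernetes-dashboard namespace with the label selector k8s-app=kubernetes-dashboard and gives up when the 9m0s context expires, because the apiserver at 192.168.61.229:8443 keeps refusing connections. The sketch below reproduces that kind of poll with client-go; it is illustrative only, not the minikube harness code, and the function name waitForDashboardPod, the kubeconfig path, and the 3-second retry interval are assumptions.

// waitForDashboardPod is an illustrative sketch (not minikube test code) of the
// poll behind the WARNING lines above: list pods in the kubernetes-dashboard
// namespace by label and retry until one is Running or the deadline expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForDashboardPod(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// Corresponds to the WARNING lines above: the list call fails while
			// the apiserver is unreachable and the loop simply retries.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q failed to start within %v: %w",
				"k8s-app=kubernetes-dashboard", timeout, ctx.Err())
		case <-time.After(3 * time.Second): // retry interval is an assumption
		}
	}
}

func main() {
	// The kubeconfig path here is a placeholder assumption.
	if err := waitForDashboardPod("/home/jenkins/.kube/config", 9*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Run against an unreachable apiserver, this would print the same kind of "connection refused" warnings until the deadline and then return a context-deadline error comparable to the one reported by start_stop_delete_test.go:272.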
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 2 (219.7045ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
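The status probes above use minikube's Go-template output (--format={{.Host}} and --format={{.APIServer}}), and the non-zero exit (status 2) appears to be tolerated here ("may be ok") because a stopped component is an expected state at this point in the post-mortem. A minimal sketch of re-running that probe from Go via os/exec, with the binary path and profile name copied from the command shown above (not harness code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as in the log above; CombinedOutput captures "Stopped"
	// together with any error text, and err carries the exit status.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "old-k8s-version-803027",
		"-n", "old-k8s-version-803027")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s, err: %v\n", out, err)
}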
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-803027 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-788750 sudo iptables                       | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo docker                         | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo find                           | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo crio                           | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-788750                                     | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:59:14
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:59:14.981692  684423 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:59:14.981852  684423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:59:14.981867  684423 out.go:358] Setting ErrFile to fd 2...
	I0317 13:59:14.981874  684423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:59:14.982141  684423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:59:14.982809  684423 out.go:352] Setting JSON to false
	I0317 13:59:14.984111  684423 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13299,"bootTime":1742206656,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:59:14.984207  684423 start.go:139] virtualization: kvm guest
	I0317 13:59:14.986343  684423 out.go:177] * [bridge-788750] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:59:14.987706  684423 notify.go:220] Checking for updates...
	I0317 13:59:14.987715  684423 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:59:14.989330  684423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:59:14.990916  684423 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:59:14.992287  684423 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:59:14.993610  684423 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:59:14.995116  684423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:59:14.997007  684423 config.go:182] Loaded profile config "enable-default-cni-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:14.997099  684423 config.go:182] Loaded profile config "flannel-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:14.997181  684423 config.go:182] Loaded profile config "old-k8s-version-803027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:59:14.997265  684423 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:59:15.035775  684423 out.go:177] * Using the kvm2 driver based on user configuration
	I0317 13:59:14.820648  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.821374  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has current primary IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.821404  682662 main.go:141] libmachine: (flannel-788750) found domain IP: 192.168.72.30
	I0317 13:59:14.821418  682662 main.go:141] libmachine: (flannel-788750) reserving static IP address...
	I0317 13:59:14.821957  682662 main.go:141] libmachine: (flannel-788750) DBG | unable to find host DHCP lease matching {name: "flannel-788750", mac: "52:54:00:55:e8:19", ip: "192.168.72.30"} in network mk-flannel-788750
	I0317 13:59:14.906769  682662 main.go:141] libmachine: (flannel-788750) DBG | Getting to WaitForSSH function...
	I0317 13:59:14.906805  682662 main.go:141] libmachine: (flannel-788750) reserved static IP address 192.168.72.30 for domain flannel-788750
	I0317 13:59:14.906819  682662 main.go:141] libmachine: (flannel-788750) waiting for SSH...
	I0317 13:59:14.909743  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.910088  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:14.910120  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.910299  682662 main.go:141] libmachine: (flannel-788750) DBG | Using SSH client type: external
	I0317 13:59:14.910327  682662 main.go:141] libmachine: (flannel-788750) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa (-rw-------)
	I0317 13:59:14.910360  682662 main.go:141] libmachine: (flannel-788750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:59:14.910373  682662 main.go:141] libmachine: (flannel-788750) DBG | About to run SSH command:
	I0317 13:59:14.910388  682662 main.go:141] libmachine: (flannel-788750) DBG | exit 0
	I0317 13:59:15.039803  682662 main.go:141] libmachine: (flannel-788750) DBG | SSH cmd err, output: <nil>: 
	I0317 13:59:15.040031  682662 main.go:141] libmachine: (flannel-788750) KVM machine creation complete
	I0317 13:59:15.040330  682662 main.go:141] libmachine: (flannel-788750) Calling .GetConfigRaw
	I0317 13:59:15.040923  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:15.041146  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:15.041319  682662 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 13:59:15.041338  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:15.037243  684423 start.go:297] selected driver: kvm2
	I0317 13:59:15.037267  684423 start.go:901] validating driver "kvm2" against <nil>
	I0317 13:59:15.037287  684423 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:59:15.038541  684423 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:59:15.038644  684423 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 13:59:15.057562  684423 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 13:59:15.057627  684423 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 13:59:15.057863  684423 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:59:15.057895  684423 cni.go:84] Creating CNI manager for "bridge"
	I0317 13:59:15.057901  684423 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 13:59:15.057945  684423 start.go:340] cluster config:
	{Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:59:15.058020  684423 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:59:15.059766  684423 out.go:177] * Starting "bridge-788750" primary control-plane node in "bridge-788750" cluster
	I0317 13:59:15.061061  684423 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:59:15.061110  684423 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0317 13:59:15.061133  684423 cache.go:56] Caching tarball of preloaded images
	I0317 13:59:15.061226  684423 preload.go:172] Found /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0317 13:59:15.061242  684423 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0317 13:59:15.061359  684423 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/config.json ...
	I0317 13:59:15.061391  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/config.json: {Name:mkeb86f621957feb90cebae88f4bfc025146aa69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:15.061584  684423 start.go:360] acquireMachinesLock for bridge-788750: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:59:16.236468  684423 start.go:364] duration metric: took 1.174838631s to acquireMachinesLock for "bridge-788750"
	I0317 13:59:16.236553  684423 start.go:93] Provisioning new machine with config: &{Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:59:16.236662  684423 start.go:125] createHost starting for "" (driver="kvm2")
	I0317 13:59:15.042960  682662 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 13:59:15.042977  682662 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 13:59:15.042984  682662 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 13:59:15.042994  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.046053  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.046440  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.046460  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.046654  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.047369  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.047564  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.047723  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.047905  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.048115  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.048125  682662 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 13:59:15.155136  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:59:15.155161  682662 main.go:141] libmachine: Detecting the provisioner...
	I0317 13:59:15.155171  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.157989  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.158314  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.158344  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.158604  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.158819  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.158982  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.159164  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.159287  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.159569  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.159584  682662 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 13:59:15.263937  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 13:59:15.264006  682662 main.go:141] libmachine: found compatible host: buildroot
	I0317 13:59:15.264013  682662 main.go:141] libmachine: Provisioning with buildroot...
	I0317 13:59:15.264021  682662 main.go:141] libmachine: (flannel-788750) Calling .GetMachineName
	I0317 13:59:15.264323  682662 buildroot.go:166] provisioning hostname "flannel-788750"
	I0317 13:59:15.264358  682662 main.go:141] libmachine: (flannel-788750) Calling .GetMachineName
	I0317 13:59:15.264595  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.267397  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.267894  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.267919  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.268123  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.268363  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.268540  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.268702  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.268870  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.269106  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.269121  682662 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-788750 && echo "flannel-788750" | sudo tee /etc/hostname
	I0317 13:59:15.393761  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-788750
	
	I0317 13:59:15.393795  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.396701  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.397053  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.397079  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.397315  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.397527  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.397685  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.397812  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.397956  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.398219  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.398235  682662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-788750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-788750/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-788750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:59:15.512038  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:59:15.512072  682662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:59:15.512093  682662 buildroot.go:174] setting up certificates
	I0317 13:59:15.512102  682662 provision.go:84] configureAuth start
	I0317 13:59:15.512110  682662 main.go:141] libmachine: (flannel-788750) Calling .GetMachineName
	I0317 13:59:15.512392  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:15.515143  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.515466  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.515492  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.515711  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.517703  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.517986  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.518013  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.518136  682662 provision.go:143] copyHostCerts
	I0317 13:59:15.518194  682662 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:59:15.518211  682662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:59:15.518281  682662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:59:15.518370  682662 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:59:15.518378  682662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:59:15.518401  682662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:59:15.518451  682662 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:59:15.518459  682662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:59:15.518487  682662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:59:15.518537  682662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.flannel-788750 san=[127.0.0.1 192.168.72.30 flannel-788750 localhost minikube]
	I0317 13:59:15.606367  682662 provision.go:177] copyRemoteCerts
	I0317 13:59:15.606436  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:59:15.606478  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.608965  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.609288  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.609320  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.609467  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.609677  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.609868  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.610035  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:15.692959  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:59:15.715060  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0317 13:59:15.736168  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 13:59:15.757347  682662 provision.go:87] duration metric: took 245.231065ms to configureAuth
	I0317 13:59:15.757375  682662 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:59:15.757523  682662 config.go:182] Loaded profile config "flannel-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:15.757599  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.760083  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.760447  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.760473  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.760703  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.760886  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.761040  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.761189  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.761364  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.761619  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.761640  682662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:59:15.989797  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:59:15.989831  682662 main.go:141] libmachine: Checking connection to Docker...
	I0317 13:59:15.989841  682662 main.go:141] libmachine: (flannel-788750) Calling .GetURL
	I0317 13:59:15.991175  682662 main.go:141] libmachine: (flannel-788750) DBG | using libvirt version 6000000
	I0317 13:59:15.993619  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.993970  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.993998  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.994173  682662 main.go:141] libmachine: Docker is up and running!
	I0317 13:59:15.994189  682662 main.go:141] libmachine: Reticulating splines...
	I0317 13:59:15.994198  682662 client.go:171] duration metric: took 25.832600711s to LocalClient.Create
	I0317 13:59:15.994227  682662 start.go:167] duration metric: took 25.832673652s to libmachine.API.Create "flannel-788750"
	I0317 13:59:15.994239  682662 start.go:293] postStartSetup for "flannel-788750" (driver="kvm2")
	I0317 13:59:15.994255  682662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:59:15.994280  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:15.994552  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:59:15.994591  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.996836  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.997188  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.997218  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.997354  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.997523  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.997708  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.997830  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:16.082655  682662 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:59:16.086465  682662 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:59:16.086500  682662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:59:16.086557  682662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:59:16.086623  682662 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:59:16.086707  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:59:16.096327  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:16.120987  682662 start.go:296] duration metric: took 126.730504ms for postStartSetup
	I0317 13:59:16.121051  682662 main.go:141] libmachine: (flannel-788750) Calling .GetConfigRaw
	I0317 13:59:16.121795  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:16.124252  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.124669  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.124695  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.124960  682662 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/config.json ...
	I0317 13:59:16.125174  682662 start.go:128] duration metric: took 25.986754439s to createHost
	I0317 13:59:16.125209  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:16.127973  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.128376  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.128405  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.128538  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:16.128709  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.128874  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.129023  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:16.129206  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:16.129486  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:16.129501  682662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:59:16.236319  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219956.214556978
	
	I0317 13:59:16.236345  682662 fix.go:216] guest clock: 1742219956.214556978
	I0317 13:59:16.236353  682662 fix.go:229] Guest: 2025-03-17 13:59:16.214556978 +0000 UTC Remote: 2025-03-17 13:59:16.125191891 +0000 UTC m=+26.132597802 (delta=89.365087ms)
	I0317 13:59:16.236374  682662 fix.go:200] guest clock delta is within tolerance: 89.365087ms
	I0317 13:59:16.236379  682662 start.go:83] releasing machines lock for "flannel-788750", held for 26.098086792s
	I0317 13:59:16.236406  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.236717  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:16.240150  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.242931  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.242954  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.243184  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.243857  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.247621  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.247686  682662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:59:16.247747  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:16.247854  682662 ssh_runner.go:195] Run: cat /version.json
	I0317 13:59:16.247879  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:16.251119  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251267  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251402  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.251424  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251567  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.251590  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251600  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:16.251792  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:16.251874  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.251958  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.252029  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:16.252213  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:16.252268  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:16.252413  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:16.336552  682662 ssh_runner.go:195] Run: systemctl --version
	I0317 13:59:16.372394  682662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:59:16.543479  682662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:59:16.549196  682662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:59:16.549278  682662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:59:16.567894  682662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:59:16.567923  682662 start.go:495] detecting cgroup driver to use...
	I0317 13:59:16.568007  682662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:59:16.591718  682662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:59:16.606627  682662 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:59:16.606699  682662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:59:16.620043  682662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:59:16.635200  682662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:59:16.752393  682662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:59:16.899089  682662 docker.go:233] disabling docker service ...
	I0317 13:59:16.899148  682662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:59:16.914164  682662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:59:16.928117  682662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:59:17.053498  682662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:59:17.189186  682662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:59:17.203833  682662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:59:17.223316  682662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0317 13:59:17.223397  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.233530  682662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:59:17.233601  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.243490  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.253607  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.263744  682662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:59:17.274183  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.287378  682662 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.303360  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.313576  682662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:59:17.322490  682662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:59:17.322555  682662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:59:17.336395  682662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:59:17.345254  682662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:17.458590  682662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:59:17.543773  682662 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:59:17.543842  682662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:59:17.548368  682662 start.go:563] Will wait 60s for crictl version
	I0317 13:59:17.548436  682662 ssh_runner.go:195] Run: which crictl
	I0317 13:59:17.552779  682662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:59:17.595329  682662 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:59:17.595419  682662 ssh_runner.go:195] Run: crio --version
	I0317 13:59:17.621136  682662 ssh_runner.go:195] Run: crio --version
	I0317 13:59:17.650209  682662 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0317 13:59:16.239781  684423 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0317 13:59:16.239987  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:16.240028  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:16.260585  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33303
	I0317 13:59:16.261043  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:16.261626  684423 main.go:141] libmachine: Using API Version  1
	I0317 13:59:16.261650  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:16.262203  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:16.262429  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:16.262618  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:16.262805  684423 start.go:159] libmachine.API.Create for "bridge-788750" (driver="kvm2")
	I0317 13:59:16.262832  684423 client.go:168] LocalClient.Create starting
	I0317 13:59:16.262873  684423 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem
	I0317 13:59:16.262914  684423 main.go:141] libmachine: Decoding PEM data...
	I0317 13:59:16.262936  684423 main.go:141] libmachine: Parsing certificate...
	I0317 13:59:16.263026  684423 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem
	I0317 13:59:16.263055  684423 main.go:141] libmachine: Decoding PEM data...
	I0317 13:59:16.263627  684423 main.go:141] libmachine: Parsing certificate...
	I0317 13:59:16.263688  684423 main.go:141] libmachine: Running pre-create checks...
	I0317 13:59:16.263699  684423 main.go:141] libmachine: (bridge-788750) Calling .PreCreateCheck
	I0317 13:59:16.265317  684423 main.go:141] libmachine: (bridge-788750) Calling .GetConfigRaw
	I0317 13:59:16.266685  684423 main.go:141] libmachine: Creating machine...
	I0317 13:59:16.266703  684423 main.go:141] libmachine: (bridge-788750) Calling .Create
	I0317 13:59:16.266873  684423 main.go:141] libmachine: (bridge-788750) creating KVM machine...
	I0317 13:59:16.266894  684423 main.go:141] libmachine: (bridge-788750) creating network...
	I0317 13:59:16.268321  684423 main.go:141] libmachine: (bridge-788750) DBG | found existing default KVM network
	I0317 13:59:16.270323  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.270123  684478 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013de0}
	I0317 13:59:16.270347  684423 main.go:141] libmachine: (bridge-788750) DBG | created network xml: 
	I0317 13:59:16.270356  684423 main.go:141] libmachine: (bridge-788750) DBG | <network>
	I0317 13:59:16.270365  684423 main.go:141] libmachine: (bridge-788750) DBG |   <name>mk-bridge-788750</name>
	I0317 13:59:16.270372  684423 main.go:141] libmachine: (bridge-788750) DBG |   <dns enable='no'/>
	I0317 13:59:16.270379  684423 main.go:141] libmachine: (bridge-788750) DBG |   
	I0317 13:59:16.270388  684423 main.go:141] libmachine: (bridge-788750) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0317 13:59:16.270396  684423 main.go:141] libmachine: (bridge-788750) DBG |     <dhcp>
	I0317 13:59:16.270404  684423 main.go:141] libmachine: (bridge-788750) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0317 13:59:16.270412  684423 main.go:141] libmachine: (bridge-788750) DBG |     </dhcp>
	I0317 13:59:16.270418  684423 main.go:141] libmachine: (bridge-788750) DBG |   </ip>
	I0317 13:59:16.270426  684423 main.go:141] libmachine: (bridge-788750) DBG |   
	I0317 13:59:16.270432  684423 main.go:141] libmachine: (bridge-788750) DBG | </network>
	I0317 13:59:16.270440  684423 main.go:141] libmachine: (bridge-788750) DBG | 
	I0317 13:59:16.276393  684423 main.go:141] libmachine: (bridge-788750) DBG | trying to create private KVM network mk-bridge-788750 192.168.39.0/24...
	I0317 13:59:16.361973  684423 main.go:141] libmachine: (bridge-788750) setting up store path in /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750 ...
	I0317 13:59:16.362009  684423 main.go:141] libmachine: (bridge-788750) DBG | private KVM network mk-bridge-788750 192.168.39.0/24 created
	I0317 13:59:16.362022  684423 main.go:141] libmachine: (bridge-788750) building disk image from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0317 13:59:16.362044  684423 main.go:141] libmachine: (bridge-788750) Downloading /home/jenkins/minikube-integration/20539-621978/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 13:59:16.362105  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.359405  684478 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:59:16.657775  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.657652  684478 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa...
	I0317 13:59:16.896870  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.896712  684478 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/bridge-788750.rawdisk...
	I0317 13:59:16.896904  684423 main.go:141] libmachine: (bridge-788750) DBG | Writing magic tar header
	I0317 13:59:16.896919  684423 main.go:141] libmachine: (bridge-788750) DBG | Writing SSH key tar header
	I0317 13:59:16.896931  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.896829  684478 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750 ...
	I0317 13:59:16.896949  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750
	I0317 13:59:16.896963  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750 (perms=drwx------)
	I0317 13:59:16.896975  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines
	I0317 13:59:16.896989  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:59:16.897000  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978
	I0317 13:59:16.897011  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines (perms=drwxr-xr-x)
	I0317 13:59:16.897027  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube (perms=drwxr-xr-x)
	I0317 13:59:16.897040  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978 (perms=drwxrwxr-x)
	I0317 13:59:16.897049  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0317 13:59:16.897059  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins
	I0317 13:59:16.897070  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home
	I0317 13:59:16.897081  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0317 13:59:16.897102  684423 main.go:141] libmachine: (bridge-788750) DBG | skipping /home - not owner
	I0317 13:59:16.897114  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0317 13:59:16.897128  684423 main.go:141] libmachine: (bridge-788750) creating domain...
	I0317 13:59:16.898240  684423 main.go:141] libmachine: (bridge-788750) define libvirt domain using xml: 
	I0317 13:59:16.898267  684423 main.go:141] libmachine: (bridge-788750) <domain type='kvm'>
	I0317 13:59:16.898276  684423 main.go:141] libmachine: (bridge-788750)   <name>bridge-788750</name>
	I0317 13:59:16.898286  684423 main.go:141] libmachine: (bridge-788750)   <memory unit='MiB'>3072</memory>
	I0317 13:59:16.898328  684423 main.go:141] libmachine: (bridge-788750)   <vcpu>2</vcpu>
	I0317 13:59:16.898361  684423 main.go:141] libmachine: (bridge-788750)   <features>
	I0317 13:59:16.898371  684423 main.go:141] libmachine: (bridge-788750)     <acpi/>
	I0317 13:59:16.898379  684423 main.go:141] libmachine: (bridge-788750)     <apic/>
	I0317 13:59:16.898391  684423 main.go:141] libmachine: (bridge-788750)     <pae/>
	I0317 13:59:16.898401  684423 main.go:141] libmachine: (bridge-788750)     
	I0317 13:59:16.898409  684423 main.go:141] libmachine: (bridge-788750)   </features>
	I0317 13:59:16.898419  684423 main.go:141] libmachine: (bridge-788750)   <cpu mode='host-passthrough'>
	I0317 13:59:16.898428  684423 main.go:141] libmachine: (bridge-788750)   
	I0317 13:59:16.898436  684423 main.go:141] libmachine: (bridge-788750)   </cpu>
	I0317 13:59:16.898444  684423 main.go:141] libmachine: (bridge-788750)   <os>
	I0317 13:59:16.898452  684423 main.go:141] libmachine: (bridge-788750)     <type>hvm</type>
	I0317 13:59:16.898460  684423 main.go:141] libmachine: (bridge-788750)     <boot dev='cdrom'/>
	I0317 13:59:16.898470  684423 main.go:141] libmachine: (bridge-788750)     <boot dev='hd'/>
	I0317 13:59:16.898477  684423 main.go:141] libmachine: (bridge-788750)     <bootmenu enable='no'/>
	I0317 13:59:16.898485  684423 main.go:141] libmachine: (bridge-788750)   </os>
	I0317 13:59:16.898492  684423 main.go:141] libmachine: (bridge-788750)   <devices>
	I0317 13:59:16.898506  684423 main.go:141] libmachine: (bridge-788750)     <disk type='file' device='cdrom'>
	I0317 13:59:16.898519  684423 main.go:141] libmachine: (bridge-788750)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/boot2docker.iso'/>
	I0317 13:59:16.898532  684423 main.go:141] libmachine: (bridge-788750)       <target dev='hdc' bus='scsi'/>
	I0317 13:59:16.898550  684423 main.go:141] libmachine: (bridge-788750)       <readonly/>
	I0317 13:59:16.898559  684423 main.go:141] libmachine: (bridge-788750)     </disk>
	I0317 13:59:16.898568  684423 main.go:141] libmachine: (bridge-788750)     <disk type='file' device='disk'>
	I0317 13:59:16.898584  684423 main.go:141] libmachine: (bridge-788750)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0317 13:59:16.898615  684423 main.go:141] libmachine: (bridge-788750)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/bridge-788750.rawdisk'/>
	I0317 13:59:16.898627  684423 main.go:141] libmachine: (bridge-788750)       <target dev='hda' bus='virtio'/>
	I0317 13:59:16.898636  684423 main.go:141] libmachine: (bridge-788750)     </disk>
	I0317 13:59:16.898646  684423 main.go:141] libmachine: (bridge-788750)     <interface type='network'>
	I0317 13:59:16.898676  684423 main.go:141] libmachine: (bridge-788750)       <source network='mk-bridge-788750'/>
	I0317 13:59:16.898700  684423 main.go:141] libmachine: (bridge-788750)       <model type='virtio'/>
	I0317 13:59:16.898709  684423 main.go:141] libmachine: (bridge-788750)     </interface>
	I0317 13:59:16.898721  684423 main.go:141] libmachine: (bridge-788750)     <interface type='network'>
	I0317 13:59:16.898738  684423 main.go:141] libmachine: (bridge-788750)       <source network='default'/>
	I0317 13:59:16.898748  684423 main.go:141] libmachine: (bridge-788750)       <model type='virtio'/>
	I0317 13:59:16.898763  684423 main.go:141] libmachine: (bridge-788750)     </interface>
	I0317 13:59:16.898776  684423 main.go:141] libmachine: (bridge-788750)     <serial type='pty'>
	I0317 13:59:16.898787  684423 main.go:141] libmachine: (bridge-788750)       <target port='0'/>
	I0317 13:59:16.898794  684423 main.go:141] libmachine: (bridge-788750)     </serial>
	I0317 13:59:16.898802  684423 main.go:141] libmachine: (bridge-788750)     <console type='pty'>
	I0317 13:59:16.898813  684423 main.go:141] libmachine: (bridge-788750)       <target type='serial' port='0'/>
	I0317 13:59:16.898819  684423 main.go:141] libmachine: (bridge-788750)     </console>
	I0317 13:59:16.898831  684423 main.go:141] libmachine: (bridge-788750)     <rng model='virtio'>
	I0317 13:59:16.898839  684423 main.go:141] libmachine: (bridge-788750)       <backend model='random'>/dev/random</backend>
	I0317 13:59:16.898851  684423 main.go:141] libmachine: (bridge-788750)     </rng>
	I0317 13:59:16.898874  684423 main.go:141] libmachine: (bridge-788750)     
	I0317 13:59:16.898906  684423 main.go:141] libmachine: (bridge-788750)     
	I0317 13:59:16.898924  684423 main.go:141] libmachine: (bridge-788750)   </devices>
	I0317 13:59:16.898943  684423 main.go:141] libmachine: (bridge-788750) </domain>
	I0317 13:59:16.898963  684423 main.go:141] libmachine: (bridge-788750) 
	I0317 13:59:16.903437  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:d3:5c:cd in network default
	I0317 13:59:16.904002  684423 main.go:141] libmachine: (bridge-788750) starting domain...
	I0317 13:59:16.904026  684423 main.go:141] libmachine: (bridge-788750) ensuring networks are active...
	I0317 13:59:16.904037  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:16.904754  684423 main.go:141] libmachine: (bridge-788750) Ensuring network default is active
	I0317 13:59:16.905086  684423 main.go:141] libmachine: (bridge-788750) Ensuring network mk-bridge-788750 is active
	I0317 13:59:16.905562  684423 main.go:141] libmachine: (bridge-788750) getting domain XML...
	I0317 13:59:16.906187  684423 main.go:141] libmachine: (bridge-788750) creating domain...
	I0317 13:59:18.327351  684423 main.go:141] libmachine: (bridge-788750) waiting for IP...
	I0317 13:59:18.328411  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:18.328897  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:18.328988  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:18.328892  684478 retry.go:31] will retry after 281.911181ms: waiting for domain to come up
	I0317 13:59:18.613012  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:18.613673  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:18.613705  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:18.613640  684478 retry.go:31] will retry after 285.120088ms: waiting for domain to come up
	I0317 13:59:18.900301  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:18.900985  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:18.901010  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:18.900958  684478 retry.go:31] will retry after 300.755427ms: waiting for domain to come up
	I0317 13:59:19.203685  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:19.204433  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:19.204487  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:19.204404  684478 retry.go:31] will retry after 482.495453ms: waiting for domain to come up
	I0317 13:59:19.688081  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:19.688673  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:19.688704  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:19.688626  684478 retry.go:31] will retry after 726.121432ms: waiting for domain to come up
	I0317 13:59:17.651513  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:17.654706  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:17.655140  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:17.655175  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:17.655433  682662 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0317 13:59:17.659262  682662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:59:17.670748  682662 kubeadm.go:883] updating cluster {Name:flannel-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-788750
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:59:17.670853  682662 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:59:17.670896  682662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:17.702512  682662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0317 13:59:17.702583  682662 ssh_runner.go:195] Run: which lz4
	I0317 13:59:17.706362  682662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:59:17.710341  682662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:59:17.710372  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0317 13:59:19.011475  682662 crio.go:462] duration metric: took 1.305154533s to copy over tarball
	I0317 13:59:19.011575  682662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:59:21.330326  682662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.318697484s)
	I0317 13:59:21.330366  682662 crio.go:469] duration metric: took 2.318859908s to extract the tarball
	I0317 13:59:21.330377  682662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:59:21.368396  682662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:21.409403  682662 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:59:21.409435  682662 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:59:21.409446  682662 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.32.2 crio true true} ...
	I0317 13:59:21.409567  682662 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-788750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0317 13:59:21.409728  682662 ssh_runner.go:195] Run: crio config
	I0317 13:59:21.461149  682662 cni.go:84] Creating CNI manager for "flannel"
	I0317 13:59:21.461173  682662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:59:21.461196  682662 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-788750 NodeName:flannel-788750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:59:21.461312  682662 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-788750"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.30"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:59:21.461375  682662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:59:21.471315  682662 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:59:21.471401  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:59:21.480637  682662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0317 13:59:21.497818  682662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:59:21.514202  682662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0317 13:59:21.531846  682662 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0317 13:59:21.535852  682662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:59:21.547918  682662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:21.686995  682662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:59:21.707033  682662 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750 for IP: 192.168.72.30
	I0317 13:59:21.707066  682662 certs.go:194] generating shared ca certs ...
	I0317 13:59:21.707100  682662 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:21.707315  682662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:59:21.707394  682662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:59:21.707412  682662 certs.go:256] generating profile certs ...
	I0317 13:59:21.707485  682662 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.key
	I0317 13:59:21.707504  682662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt with IP's: []
	I0317 13:59:21.991318  682662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt ...
	I0317 13:59:21.991349  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: {Name:mk98eed9ca2b5d327d7f4f5299f99a2ef0fd27b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:21.991510  682662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.key ...
	I0317 13:59:21.991521  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.key: {Name:mkb9d21292c13affabb06e343bb09c1a56eddefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:21.991629  682662 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262
	I0317 13:59:21.991650  682662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.30]
	I0317 13:59:22.386930  682662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262 ...
	I0317 13:59:22.386968  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262: {Name:mk5b32d7f691721ce84195f520653f84677487de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:22.387146  682662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262 ...
	I0317 13:59:22.387165  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262: {Name:mk49fe6f886e1c3fa3806fbf01bfe3f58ce4f93f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:22.387271  682662 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt
	I0317 13:59:22.387368  682662 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key
	I0317 13:59:22.387444  682662 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key
	I0317 13:59:22.387468  682662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt with IP's: []
	I0317 13:59:23.150969  682662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt ...
	I0317 13:59:23.151001  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt: {Name:mke13a7275b9ea4a183b0de420ac1690d8c1d05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:23.151192  682662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key ...
	I0317 13:59:23.151219  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key: {Name:mk045a87c5e6145ebe19bfd7ec6b3783a3d14258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:23.151427  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:59:23.151466  682662 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:59:23.151476  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:59:23.151497  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:59:23.151524  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:59:23.151569  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:59:23.151609  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:23.152112  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:59:23.175083  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:59:23.197285  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:59:23.225143  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:59:23.247634  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 13:59:23.272656  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 13:59:23.306874  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:59:23.338116  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:59:23.368010  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:59:23.394314  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:59:23.423219  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:59:23.446176  682662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:59:23.463202  682662 ssh_runner.go:195] Run: openssl version
	I0317 13:59:23.469202  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:59:23.482769  682662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:23.487375  682662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:23.487441  682662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:23.493591  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:59:23.505891  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:59:23.518142  682662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:59:23.523622  682662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:59:23.523688  682662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:59:23.529371  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:59:23.542335  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:59:23.553110  682662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:59:23.557761  682662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:59:23.557819  682662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:59:23.563002  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:59:23.572975  682662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:59:23.576788  682662 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:59:23.576843  682662 kubeadm.go:392] StartCluster: {Name:flannel-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:59:23.576909  682662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:59:23.576950  682662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:59:23.620638  682662 cri.go:89] found id: ""
	I0317 13:59:23.620723  682662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:59:23.630796  682662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:59:23.641676  682662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:59:23.651990  682662 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:59:23.652016  682662 kubeadm.go:157] found existing configuration files:
	
	I0317 13:59:23.652066  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:59:23.662175  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:59:23.662253  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:59:23.673817  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:59:23.683465  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:59:23.683547  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:59:23.695127  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:59:23.708536  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:59:23.708603  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:59:23.720370  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:59:23.729693  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:59:23.729756  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
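
The grep/rm pairs above implement a stale-config check: a kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm can regenerate it. A compact sketch of that logic, run locally rather than over SSH (an assumption for the example), might look like this:

```go
package main

import (
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing, unreadable, or pointing at another endpoint:
			// delete it and let `kubeadm init` write a fresh one.
			_ = os.Remove(f)
		}
	}
}
```
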
	I0317 13:59:23.741346  682662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:59:23.795773  682662 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 13:59:23.795912  682662 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:59:23.888818  682662 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:59:23.889041  682662 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:59:23.889172  682662 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 13:59:23.902164  682662 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
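
For reference, the long `kubeadm init` invocation logged at 13:59:23.741346 boils down to a PATH override for the pinned v1.32.2 binaries plus a fixed --ignore-preflight-errors list. A hedged sketch of assembling and running that command locally via exec.Command (minikube itself runs it through its SSH runner) follows:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Preflight checks skipped, exactly as listed in the logged command line.
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	script := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		strings.Join(ignored, ","),
	)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}
```
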
	I0317 13:59:20.416846  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:20.417388  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:20.417472  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:20.417375  684478 retry.go:31] will retry after 578.975886ms: waiting for domain to come up
	I0317 13:59:20.998084  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:20.998743  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:20.998773  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:20.998683  684478 retry.go:31] will retry after 1.168593486s: waiting for domain to come up
	I0317 13:59:22.168602  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:22.169205  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:22.169302  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:22.169195  684478 retry.go:31] will retry after 915.875846ms: waiting for domain to come up
	I0317 13:59:23.086435  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:23.086889  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:23.086917  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:23.086855  684478 retry.go:31] will retry after 1.782289012s: waiting for domain to come up
	I0317 13:59:24.872807  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:24.873338  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:24.873403  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:24.873330  684478 retry.go:31] will retry after 2.082516204s: waiting for domain to come up
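
The repeated "will retry after …: waiting for domain to come up" lines are a polling loop: ask libvirt for the new domain's IP and back off until a DHCP lease appears. A rough sketch of such a loop is below; lookupDomainIP is a hypothetical placeholder for the real libvirt query, and the backoff growth and timeout are assumptions (the intervals in the log are jittered rather than strictly geometric).

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupDomainIP is a hypothetical stand-in for querying libvirt for the
// domain's current IP address; here it always fails, like the early retries.
func lookupDomainIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

func waitForDomainIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupDomainIP(domain); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for domain to come up\n", backoff)
		time.Sleep(backoff)
		if backoff < 10*time.Second {
			backoff += backoff / 2 // grow the wait; the real loop adds jitter
		}
	}
	return "", fmt.Errorf("domain %s never reported an IP", domain)
}

func main() {
	ip, err := waitForDomainIP("bridge-788750", 10*time.Second)
	fmt.Println(ip, err)
}
```
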
	I0317 13:59:24.044110  682662 out.go:235]   - Generating certificates and keys ...
	I0317 13:59:24.044245  682662 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:59:24.044341  682662 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:59:24.044450  682662 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:59:24.201837  682662 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:59:24.546018  682662 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:59:24.644028  682662 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:59:24.791251  682662 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:59:24.791629  682662 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-788750 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0317 13:59:25.148014  682662 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:59:25.148303  682662 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-788750 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0317 13:59:25.299352  682662 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:59:25.535177  682662 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:59:25.769563  682662 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:59:25.769811  682662 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:59:25.913584  682662 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:59:26.217258  682662 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 13:59:26.606599  682662 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:59:26.749144  682662 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:59:26.904044  682662 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:59:26.904808  682662 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:59:26.907777  682662 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:59:26.958055  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:26.958771  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:26.958797  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:26.958687  684478 retry.go:31] will retry after 1.918434497s: waiting for domain to come up
	I0317 13:59:28.884652  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:28.884965  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:28.885027  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:28.884968  684478 retry.go:31] will retry after 2.779630313s: waiting for domain to come up
	I0317 13:59:26.909655  682662 out.go:235]   - Booting up control plane ...
	I0317 13:59:26.909809  682662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:59:26.909938  682662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:59:26.910666  682662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:59:26.932841  682662 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:59:26.939466  682662 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:59:26.939639  682662 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:59:27.099640  682662 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 13:59:27.099824  682662 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 13:59:28.099988  682662 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001156397s
	I0317 13:59:28.100085  682662 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 13:59:33.100467  682662 kubeadm.go:310] [api-check] The API server is healthy after 5.001005071s
	I0317 13:59:33.110998  682662 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 13:59:33.130476  682662 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 13:59:33.152222  682662 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 13:59:33.152448  682662 kubeadm.go:310] [mark-control-plane] Marking the node flannel-788750 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 13:59:33.165204  682662 kubeadm.go:310] [bootstrap-token] Using token: wul87d.x4r8hdwyi1r15k1o
	I0317 13:59:33.166488  682662 out.go:235]   - Configuring RBAC rules ...
	I0317 13:59:33.166623  682662 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 13:59:33.171916  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 13:59:33.180293  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 13:59:33.183680  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 13:59:33.187126  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 13:59:33.198883  682662 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 13:59:33.509086  682662 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 13:59:33.946270  682662 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 13:59:34.504578  682662 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 13:59:34.505419  682662 kubeadm.go:310] 
	I0317 13:59:34.505481  682662 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 13:59:34.505490  682662 kubeadm.go:310] 
	I0317 13:59:34.505565  682662 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 13:59:34.505572  682662 kubeadm.go:310] 
	I0317 13:59:34.505592  682662 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 13:59:34.505640  682662 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 13:59:34.505688  682662 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 13:59:34.505694  682662 kubeadm.go:310] 
	I0317 13:59:34.505736  682662 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 13:59:34.505761  682662 kubeadm.go:310] 
	I0317 13:59:34.505821  682662 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 13:59:34.505829  682662 kubeadm.go:310] 
	I0317 13:59:34.505873  682662 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 13:59:34.505941  682662 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 13:59:34.506011  682662 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 13:59:34.506018  682662 kubeadm.go:310] 
	I0317 13:59:34.506093  682662 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 13:59:34.506173  682662 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 13:59:34.506184  682662 kubeadm.go:310] 
	I0317 13:59:34.506253  682662 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wul87d.x4r8hdwyi1r15k1o \
	I0317 13:59:34.506349  682662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 \
	I0317 13:59:34.506371  682662 kubeadm.go:310] 	--control-plane 
	I0317 13:59:34.506377  682662 kubeadm.go:310] 
	I0317 13:59:34.506447  682662 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 13:59:34.506453  682662 kubeadm.go:310] 
	I0317 13:59:34.506520  682662 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wul87d.x4r8hdwyi1r15k1o \
	I0317 13:59:34.506607  682662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 
	I0317 13:59:34.507486  682662 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:59:34.507585  682662 cni.go:84] Creating CNI manager for "flannel"
	I0317 13:59:34.509544  682662 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0317 13:59:31.666320  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:31.666834  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:31.666882  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:31.666805  684478 retry.go:31] will retry after 4.169301354s: waiting for domain to come up
	I0317 13:59:34.510695  682662 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 13:59:34.516421  682662 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 13:59:34.516443  682662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0317 13:59:34.533714  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 13:59:34.893048  682662 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:59:34.893132  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:34.893146  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-788750 minikube.k8s.io/updated_at=2025_03_17T13_59_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=flannel-788750 minikube.k8s.io/primary=true
	I0317 13:59:34.943374  682662 ops.go:34] apiserver oom_adj: -16
	I0317 13:59:35.063217  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:35.563994  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:36.063612  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:36.563720  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:37.063942  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:37.564320  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:37.648519  682662 kubeadm.go:1113] duration metric: took 2.75546083s to wait for elevateKubeSystemPrivileges
	I0317 13:59:37.648579  682662 kubeadm.go:394] duration metric: took 14.071739112s to StartCluster
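
Between 13:59:35 and 13:59:37 the runner re-issues `kubectl get sa default` roughly every half second until the default service account exists (the elevateKubeSystemPrivileges wait whose duration is reported above). A minimal polling sketch of that behaviour, with an assumed two-minute bound:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const kubectl = "/var/lib/minikube/binaries/v1.32.2/kubectl"
	start := time.Now()
	deadline := start.Add(2 * time.Minute) // assumed bound, not taken from the log
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Printf("default service account ready after %s\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
```
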
	I0317 13:59:37.648605  682662 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:37.648696  682662 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:59:37.649597  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:37.649855  682662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 13:59:37.649879  682662 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:59:37.649947  682662 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:59:37.650030  682662 addons.go:69] Setting storage-provisioner=true in profile "flannel-788750"
	I0317 13:59:37.650048  682662 addons.go:238] Setting addon storage-provisioner=true in "flannel-788750"
	I0317 13:59:37.650053  682662 addons.go:69] Setting default-storageclass=true in profile "flannel-788750"
	I0317 13:59:37.650079  682662 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-788750"
	I0317 13:59:37.650086  682662 host.go:66] Checking if "flannel-788750" exists ...
	I0317 13:59:37.650103  682662 config.go:182] Loaded profile config "flannel-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:37.650523  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.650547  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.650577  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.650701  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.651428  682662 out.go:177] * Verifying Kubernetes components...
	I0317 13:59:37.652854  682662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:37.666591  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40423
	I0317 13:59:37.666955  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0317 13:59:37.667168  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.667460  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.667722  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.667748  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.667987  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.668015  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.668092  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.668273  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:37.668352  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.668884  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.668932  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.671496  682662 addons.go:238] Setting addon default-storageclass=true in "flannel-788750"
	I0317 13:59:37.671568  682662 host.go:66] Checking if "flannel-788750" exists ...
	I0317 13:59:37.671824  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.671869  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.684502  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0317 13:59:37.685086  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.685604  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.685635  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.685998  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.686195  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:37.687133  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42993
	I0317 13:59:37.687558  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.687970  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.687999  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.688053  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:37.688335  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.688773  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.688810  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.689803  682662 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:59:37.690875  682662 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:59:37.690892  682662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 13:59:37.690913  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:37.694025  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.694514  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:37.694551  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.694795  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:37.694967  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:37.695127  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:37.695254  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:37.705449  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0317 13:59:37.705987  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.706407  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.706421  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.706665  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.706825  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:37.708262  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:37.708483  682662 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 13:59:37.708500  682662 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 13:59:37.708528  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:37.711278  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.711693  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:37.711724  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.711884  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:37.712049  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:37.712181  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:37.712280  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:37.792332  682662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 13:59:37.833072  682662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:59:38.013202  682662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 13:59:38.016077  682662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:59:38.285460  682662 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
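
The pipeline logged at 13:59:37.792332 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.72.1 here) by inserting a hosts{} stanza ahead of the `forward . /etc/resolv.conf` line. A small Go sketch of the same transformation applied to a Corefile string, purely illustrative rather than the kubectl|sed|kubectl-replace pipeline minikube actually runs:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block for host.minikube.internal just
// before the forward-to-resolv.conf directive of a Corefile.
func injectHostRecord(corefile, hostIP string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, stanza)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	// A trimmed example Corefile, not the full one shipped with CoreDNS.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.72.1"))
}
```
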
	I0317 13:59:38.286336  682662 node_ready.go:35] waiting up to 15m0s for node "flannel-788750" to be "Ready" ...
	I0317 13:59:38.286667  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.286688  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.286982  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.287000  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.287008  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.287015  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.287297  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.287316  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.287299  682662 main.go:141] libmachine: (flannel-788750) DBG | Closing plugin on server side
	I0317 13:59:38.336875  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.336910  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.337207  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.337268  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.337215  682662 main.go:141] libmachine: (flannel-788750) DBG | Closing plugin on server side
	I0317 13:59:38.523826  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.523846  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.524131  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.524148  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.524156  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.524163  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.524185  682662 main.go:141] libmachine: (flannel-788750) DBG | Closing plugin on server side
	I0317 13:59:38.524385  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.524400  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.525951  682662 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0317 13:59:35.840720  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:35.841321  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:35.841354  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:35.841303  684478 retry.go:31] will retry after 5.187885311s: waiting for domain to come up
	I0317 13:59:38.527111  682662 addons.go:514] duration metric: took 877.168808ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0317 13:59:38.789426  682662 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-788750" context rescaled to 1 replicas
	I0317 13:59:41.035122  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.035760  684423 main.go:141] libmachine: (bridge-788750) found domain IP: 192.168.39.172
	I0317 13:59:41.035781  684423 main.go:141] libmachine: (bridge-788750) reserving static IP address...
	I0317 13:59:41.035790  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has current primary IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.036284  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find host DHCP lease matching {name: "bridge-788750", mac: "52:54:00:f1:de:c9", ip: "192.168.39.172"} in network mk-bridge-788750
	I0317 13:59:41.115751  684423 main.go:141] libmachine: (bridge-788750) reserved static IP address 192.168.39.172 for domain bridge-788750
	I0317 13:59:41.115782  684423 main.go:141] libmachine: (bridge-788750) DBG | Getting to WaitForSSH function...
	I0317 13:59:41.115798  684423 main.go:141] libmachine: (bridge-788750) waiting for SSH...
	I0317 13:59:41.118645  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.119016  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.119063  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.119199  684423 main.go:141] libmachine: (bridge-788750) DBG | Using SSH client type: external
	I0317 13:59:41.119225  684423 main.go:141] libmachine: (bridge-788750) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa (-rw-------)
	I0317 13:59:41.119256  684423 main.go:141] libmachine: (bridge-788750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:59:41.119286  684423 main.go:141] libmachine: (bridge-788750) DBG | About to run SSH command:
	I0317 13:59:41.119302  684423 main.go:141] libmachine: (bridge-788750) DBG | exit 0
	I0317 13:59:41.239165  684423 main.go:141] libmachine: (bridge-788750) DBG | SSH cmd err, output: <nil>: 
	I0317 13:59:41.239469  684423 main.go:141] libmachine: (bridge-788750) KVM machine creation complete
	I0317 13:59:41.239768  684423 main.go:141] libmachine: (bridge-788750) Calling .GetConfigRaw
	I0317 13:59:41.240358  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:41.240533  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:41.240709  684423 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 13:59:41.240725  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 13:59:41.242592  684423 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 13:59:41.242609  684423 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 13:59:41.242616  684423 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 13:59:41.242621  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.245217  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.245580  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.245613  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.245737  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.245916  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.246031  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.246188  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.246355  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.246654  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.246667  684423 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 13:59:41.346704  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
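
The WaitForSSH step above simply retries an SSH probe (`exit 0`) against the new VM until it succeeds. A simplified sketch that waits for port 22 to accept TCP connections instead of shelling out to ssh; the timeout and retry interval are assumptions:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the guest's SSH port until a TCP connection succeeds or
// the timeout elapses.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	if err := waitForSSH("192.168.39.172:22", 5*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is reachable")
}
```
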
	I0317 13:59:41.346735  684423 main.go:141] libmachine: Detecting the provisioner...
	I0317 13:59:41.346747  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.349542  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.349892  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.349914  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.350053  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.350256  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.350419  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.350553  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.350715  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.350978  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.350993  684423 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 13:59:41.451908  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 13:59:41.452000  684423 main.go:141] libmachine: found compatible host: buildroot
	I0317 13:59:41.452014  684423 main.go:141] libmachine: Provisioning with buildroot...
	I0317 13:59:41.452029  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:41.452354  684423 buildroot.go:166] provisioning hostname "bridge-788750"
	I0317 13:59:41.452380  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:41.452554  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.455040  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.455399  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.455424  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.455605  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.455777  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.455930  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.456042  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.456163  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.456436  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.456451  684423 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-788750 && echo "bridge-788750" | sudo tee /etc/hostname
	I0317 13:59:41.567818  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-788750
	
	I0317 13:59:41.567853  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.570485  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.570807  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.570833  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.570996  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.571193  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.571364  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.571484  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.571645  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.571862  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.571897  684423 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-788750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-788750/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-788750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:59:41.678869  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:59:41.678904  684423 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:59:41.678929  684423 buildroot.go:174] setting up certificates
	I0317 13:59:41.678941  684423 provision.go:84] configureAuth start
	I0317 13:59:41.678954  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:41.679256  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:41.681754  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.682060  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.682086  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.682262  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.684392  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.684679  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.684700  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.684844  684423 provision.go:143] copyHostCerts
	I0317 13:59:41.684907  684423 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:59:41.684932  684423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:59:41.685003  684423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:59:41.685129  684423 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:59:41.685142  684423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:59:41.685177  684423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:59:41.685262  684423 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:59:41.685272  684423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:59:41.685301  684423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:59:41.685372  684423 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.bridge-788750 san=[127.0.0.1 192.168.39.172 bridge-788750 localhost minikube]
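
The "generating server cert" line records a server certificate being signed by the machine CA with the SAN list [127.0.0.1 192.168.39.172 bridge-788750 localhost minikube]. Below is a self-contained Go sketch of producing such a certificate with crypto/x509; the CA file locations, the PKCS#1 "RSA PRIVATE KEY" encoding, and the three-year validity are assumptions for the example, not minikube's exact parameters.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the machine CA; paths and PKCS#1 key encoding are assumptions.
	caPEM, err := os.ReadFile("certs/ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("certs/ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("failed to decode CA PEM data")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-788750"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list taken from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.172")},
		DNSNames:    []string{"bridge-788750", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}
```
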
	I0317 13:59:41.821887  684423 provision.go:177] copyRemoteCerts
	I0317 13:59:41.821963  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:59:41.821998  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.824975  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.825287  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.825315  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.825479  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.825693  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.825854  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.826011  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:41.905677  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 13:59:41.929529  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:59:41.950905  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 13:59:41.975981  684423 provision.go:87] duration metric: took 297.025637ms to configureAuth
	I0317 13:59:41.976008  684423 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:59:41.976155  684423 config.go:182] Loaded profile config "bridge-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:41.976223  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.978872  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.979159  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.979182  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.979352  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.979562  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.979759  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.979913  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.980059  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.980356  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.980382  684423 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:59:42.210802  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
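
The SSH command above drops a one-line environment file for CRI-O and restarts the service so the extra flag takes effect. As a standalone file, the drop-in written here is just the following (content taken from the command output above; only the comment lines are added):

	# /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

	# picked up by the restart in the same command
	sudo systemctl restart crio
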
	I0317 13:59:42.210834  684423 main.go:141] libmachine: Checking connection to Docker...
	I0317 13:59:42.210843  684423 main.go:141] libmachine: (bridge-788750) Calling .GetURL
	I0317 13:59:42.212236  684423 main.go:141] libmachine: (bridge-788750) DBG | using libvirt version 6000000
	I0317 13:59:42.214601  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.214997  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.215057  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.215186  684423 main.go:141] libmachine: Docker is up and running!
	I0317 13:59:42.215202  684423 main.go:141] libmachine: Reticulating splines...
	I0317 13:59:42.215215  684423 client.go:171] duration metric: took 25.952374084s to LocalClient.Create
	I0317 13:59:42.215251  684423 start.go:167] duration metric: took 25.952448094s to libmachine.API.Create "bridge-788750"
	I0317 13:59:42.215261  684423 start.go:293] postStartSetup for "bridge-788750" (driver="kvm2")
	I0317 13:59:42.215270  684423 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:59:42.215295  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.215556  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:59:42.215589  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.217971  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.218424  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.218456  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.218633  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.218799  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.218975  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.219128  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:42.300906  684423 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:59:42.304516  684423 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:59:42.304543  684423 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:59:42.304605  684423 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:59:42.304685  684423 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:59:42.304772  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:59:42.313026  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:42.333975  684423 start.go:296] duration metric: took 118.700242ms for postStartSetup
	I0317 13:59:42.334033  684423 main.go:141] libmachine: (bridge-788750) Calling .GetConfigRaw
	I0317 13:59:42.334606  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:42.337068  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.337371  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.337392  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.337630  684423 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/config.json ...
	I0317 13:59:42.337824  684423 start.go:128] duration metric: took 26.101149226s to createHost
	I0317 13:59:42.337851  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.339859  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.340209  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.340235  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.340363  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.340551  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.340698  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.340815  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.340963  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:42.341165  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:42.341174  684423 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:59:42.439850  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219982.415531589
	
	I0317 13:59:42.439872  684423 fix.go:216] guest clock: 1742219982.415531589
	I0317 13:59:42.439880  684423 fix.go:229] Guest: 2025-03-17 13:59:42.415531589 +0000 UTC Remote: 2025-03-17 13:59:42.337836583 +0000 UTC m=+27.394798300 (delta=77.695006ms)
	I0317 13:59:42.439905  684423 fix.go:200] guest clock delta is within tolerance: 77.695006ms
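
The guest-clock check above amounts to reading the VM's wall clock over SSH and comparing it with the host's, tolerating small drift (here about 78 ms). A minimal sketch of the same comparison run from the Jenkins host, reusing the key, user and IP shown earlier in this log; the tolerance handling itself is omitted and minikube's exact threshold is not claimed here:

	# read the guest clock over SSH and compare it with the host clock
	guest=$(ssh -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa \
	        docker@192.168.39.172 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN{printf "guest clock delta: %.6fs\n", g-h}'
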
	I0317 13:59:42.439912  684423 start.go:83] releasing machines lock for "bridge-788750", held for 26.203397217s
	I0317 13:59:42.439939  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.440201  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:42.442831  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.443753  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.443782  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.443987  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.444519  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.444688  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.444782  684423 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:59:42.444829  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.444939  684423 ssh_runner.go:195] Run: cat /version.json
	I0317 13:59:42.444960  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.447411  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.447758  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.447784  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.447802  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.447875  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.448064  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.448237  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.448251  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.448269  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.448387  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:42.448444  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.448571  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.448710  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.448828  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:42.541855  684423 ssh_runner.go:195] Run: systemctl --version
	I0317 13:59:42.548086  684423 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:59:42.702999  684423 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:59:42.708812  684423 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:59:42.708887  684423 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:59:42.723697  684423 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:59:42.723726  684423 start.go:495] detecting cgroup driver to use...
	I0317 13:59:42.723794  684423 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:59:42.739584  684423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:59:42.752485  684423 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:59:42.752559  684423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:59:42.765024  684423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:59:42.777346  684423 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:59:42.885029  684423 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:59:43.046402  684423 docker.go:233] disabling docker service ...
	I0317 13:59:43.046499  684423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:59:43.060044  684423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:59:43.072350  684423 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:59:43.187346  684423 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:59:43.322509  684423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
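
Because this profile uses CRI-O, both the cri-dockerd shim and Docker itself are taken out of the picture: sockets are stopped before services, units are disabled and masked so nothing re-activates them, and the final is-active check above confirms Docker is down. A condensed sketch of the same sequence for the docker unit pair (equivalent in effect, not the literal command order minikube runs):

	# stop, disable and mask docker so only CRI-O owns the node's containers
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	sudo systemctl is-active docker || echo "docker is inactive"
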
	I0317 13:59:43.337797  684423 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:59:43.358051  684423 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0317 13:59:43.358120  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.369454  684423 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:59:43.369564  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.381103  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.392551  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.404000  684423 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:59:43.415664  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.426423  684423 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.448074  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
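
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below. This is reconstructed from the commands, not dumped from the guest; the [crio.image]/[crio.runtime] section headers are the stock CRI-O layout and are not shown in the log, and the file may contain other stanzas.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
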
	I0317 13:59:43.458706  684423 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:59:43.470365  684423 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:59:43.470437  684423 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:59:43.483041  684423 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
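
The netfilter probe above fails only because br_netfilter is not loaded yet; the modprobe plus the ip_forward write bring the node up to the usual kube-proxy prerequisites. The equivalent manual check on the same guest would be along these lines:

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is loaded
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
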
	I0317 13:59:43.493515  684423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:43.630579  684423 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:59:43.729029  684423 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:59:43.729100  684423 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:59:43.733964  684423 start.go:563] Will wait 60s for crictl version
	I0317 13:59:43.734029  684423 ssh_runner.go:195] Run: which crictl
	I0317 13:59:43.737635  684423 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:59:43.773498  684423 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:59:43.773598  684423 ssh_runner.go:195] Run: crio --version
	I0317 13:59:43.799596  684423 ssh_runner.go:195] Run: crio --version
	I0317 13:59:43.828121  684423 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0317 13:59:43.829699  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:43.832890  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:43.833374  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:43.833402  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:43.833642  684423 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0317 13:59:43.838613  684423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
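
The one-liner above is the hosts-file idiom minikube uses throughout this log (it appears again later for control-plane.minikube.internal): strip any existing entry for the name, append the desired mapping, and copy the result back over /etc/hosts. Spelled out with comments, using the same name and IP as above:

	# drop any stale mapping for the name, append the fresh one, then copy back
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo $'192.168.39.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
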
	I0317 13:59:43.856973  684423 kubeadm.go:883] updating cluster {Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:59:43.857104  684423 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:59:43.857172  684423 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:43.890166  684423 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0317 13:59:43.890276  684423 ssh_runner.go:195] Run: which lz4
	I0317 13:59:43.894425  684423 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:59:43.898332  684423 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:59:43.898364  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
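
The stat failure above simply means no preload tarball is cached in the guest yet, so the roughly 399 MB image bundle is copied in and then unpacked directly into /var (the extract command appears a few lines below, interleaved with the flannel profile's output). The two steps in shell form, both taken verbatim from this log:

	# check for a cached preload, then extract it over the container storage in /var
	stat -c "%s %y" /preloaded.tar.lz4 || echo "no preload cached; copy it from the host first"
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
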
	I0317 13:59:40.289428  682662 node_ready.go:53] node "flannel-788750" has status "Ready":"False"
	I0317 13:59:42.289498  682662 node_ready.go:53] node "flannel-788750" has status "Ready":"False"
	I0317 13:59:44.290188  682662 node_ready.go:53] node "flannel-788750" has status "Ready":"False"
	I0317 13:59:45.225798  684423 crio.go:462] duration metric: took 1.33140551s to copy over tarball
	I0317 13:59:45.225877  684423 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:59:47.413334  684423 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.187422971s)
	I0317 13:59:47.413377  684423 crio.go:469] duration metric: took 2.187543023s to extract the tarball
	I0317 13:59:47.413388  684423 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:59:47.449993  684423 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:47.487606  684423 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:59:47.487630  684423 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:59:47.487638  684423 kubeadm.go:934] updating node { 192.168.39.172 8443 v1.32.2 crio true true} ...
	I0317 13:59:47.487749  684423 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-788750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
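
The fragment above is the systemd drop-in minikube generates for the kubelet; the scp lines that follow place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside the kubelet.service unit. Once the daemon-reload further down has run, the override can be inspected with systemd itself (a sketch, not part of this log):

	# show the kubelet unit together with its minikube drop-in
	systemctl cat kubelet
	# confirm the overridden ExecStart is the one in effect
	systemctl show kubelet -p ExecStart
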
	I0317 13:59:47.487816  684423 ssh_runner.go:195] Run: crio config
	I0317 13:59:47.534961  684423 cni.go:84] Creating CNI manager for "bridge"
	I0317 13:59:47.535001  684423 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:59:47.535023  684423 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-788750 NodeName:bridge-788750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:59:47.535182  684423 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-788750"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.172"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
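
The rendered kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later promoted to kubeadm.yaml before init. If a config like this ever needs to be checked by hand, recent kubeadm releases can validate it offline; the command below is a sketch using the binary path from this log, and is not something the log itself runs:

	# offline sanity check of the generated config (kubeadm >= 1.26)
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
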
	I0317 13:59:47.535265  684423 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:59:47.545198  684423 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:59:47.545288  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:59:47.554688  684423 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0317 13:59:47.570775  684423 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:59:47.586074  684423 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0317 13:59:47.601202  684423 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I0317 13:59:47.604740  684423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:59:47.616014  684423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:47.728366  684423 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:59:47.743468  684423 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750 for IP: 192.168.39.172
	I0317 13:59:47.743513  684423 certs.go:194] generating shared ca certs ...
	I0317 13:59:47.743563  684423 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:47.743737  684423 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:59:47.743797  684423 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:59:47.743818  684423 certs.go:256] generating profile certs ...
	I0317 13:59:47.743881  684423 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.key
	I0317 13:59:47.743903  684423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt with IP's: []
	I0317 13:59:47.925990  684423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt ...
	I0317 13:59:47.926022  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: {Name:mk57b03e60343324f33ad0a804eeb5fac91ff61e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:47.926184  684423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.key ...
	I0317 13:59:47.926194  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.key: {Name:mka3fd5553386d9680255eba9e4b30307d081270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:47.926268  684423 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf
	I0317 13:59:47.926283  684423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172]
	I0317 13:59:48.596199  684423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf ...
	I0317 13:59:48.596251  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf: {Name:mkbe02ed764b875a14246503fcc050fdb71db7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.596488  684423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf ...
	I0317 13:59:48.596518  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf: {Name:mk3fa88c7fab72a1bf633ff2d7f92bde1aceb5c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.596660  684423 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt
	I0317 13:59:48.596782  684423 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key
	I0317 13:59:48.596878  684423 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key
	I0317 13:59:48.596903  684423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt with IP's: []
	I0317 13:59:48.787513  684423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt ...
	I0317 13:59:48.787555  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt: {Name:mkd3b1b33b0e3868ee38a25e6cd6690a1040bc04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.787732  684423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key ...
	I0317 13:59:48.787744  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key: {Name:mkd529fbd19dbc16b398c1bddab0b44e7d4e1345 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.787912  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:59:48.787955  684423 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:59:48.787965  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:59:48.787986  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:59:48.788012  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:59:48.788046  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:59:48.788086  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:48.788618  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:59:48.815047  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:59:48.837091  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:59:48.858337  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:59:48.882013  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 13:59:48.903168  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 13:59:48.925979  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:59:48.946611  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:59:48.970676  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:59:48.997588  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:59:49.019322  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:59:49.041024  684423 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:59:49.056421  684423 ssh_runner.go:195] Run: openssl version
	I0317 13:59:49.061875  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:59:49.072082  684423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:59:49.076316  684423 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:59:49.076377  684423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:59:49.081812  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:59:49.092035  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:59:49.102272  684423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:59:49.106676  684423 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:59:49.106727  684423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:59:49.112133  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:59:49.121990  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:59:49.131611  684423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:49.135725  684423 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:49.135803  684423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:49.141146  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
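
The openssl/ln pairs above implement the standard OpenSSL CA-directory layout: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941 for minikubeCA.pem, matching the link created on the line above). The pattern, generalised as a small sketch:

	# link a CA cert into the hashed directory OpenSSL actually searches
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
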
	I0317 13:59:49.151486  684423 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:59:49.155121  684423 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:59:49.155172  684423 kubeadm.go:392] StartCluster: {Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:59:49.155238  684423 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:59:49.155277  684423 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:59:49.187716  684423 cri.go:89] found id: ""
	I0317 13:59:49.187787  684423 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:59:49.197392  684423 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:59:49.206456  684423 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:59:49.215648  684423 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:59:49.215666  684423 kubeadm.go:157] found existing configuration files:
	
	I0317 13:59:49.215701  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:59:49.224457  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:59:49.224510  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:59:49.233665  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:59:49.245183  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:59:49.245257  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:59:49.257317  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:59:49.269822  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:59:49.269892  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:59:49.281019  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:59:49.291173  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:59:49.291250  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
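
The four grep/rm pairs above are the same stale-config check applied to each kubeconfig kubeadm will regenerate: if the file does not reference control-plane.minikube.internal:8443 (here the files simply do not exist yet), it is removed so kubeadm init can write a fresh one. Condensed into a loop, equivalent in effect to what the log shows but not the literal commands minikube runs:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done
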
	I0317 13:59:49.303204  684423 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:59:49.350647  684423 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 13:59:49.350717  684423 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:59:49.447801  684423 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:59:49.447928  684423 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:59:49.448087  684423 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 13:59:49.457405  684423 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:59:49.542153  684423 out.go:235]   - Generating certificates and keys ...
	I0317 13:59:49.542293  684423 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:59:49.542375  684423 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:59:49.722255  684423 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:59:49.810201  684423 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:59:45.790781  682662 node_ready.go:49] node "flannel-788750" has status "Ready":"True"
	I0317 13:59:45.790806  682662 node_ready.go:38] duration metric: took 7.504444131s for node "flannel-788750" to be "Ready" ...
	I0317 13:59:45.790816  682662 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:59:45.797709  682662 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace to be "Ready" ...
	I0317 13:59:47.804134  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:50.058796  684423 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:59:50.325974  684423 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:59:50.766611  684423 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:59:50.766821  684423 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-788750 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0317 13:59:50.962806  684423 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:59:50.962985  684423 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-788750 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0317 13:59:51.069262  684423 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:59:51.154142  684423 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:59:51.485810  684423 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:59:51.486035  684423 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:59:51.589554  684423 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:59:51.703382  684423 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 13:59:51.818706  684423 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:59:51.939373  684423 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:59:52.087035  684423 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:59:52.087704  684423 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:59:52.090229  684423 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:59:52.093249  684423 out.go:235]   - Booting up control plane ...
	I0317 13:59:52.093382  684423 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:59:52.093493  684423 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:59:52.093923  684423 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:59:52.111087  684423 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:59:52.117277  684423 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:59:52.117337  684423 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:59:52.258455  684423 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 13:59:52.258600  684423 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 13:59:53.259182  684423 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00165578s
	I0317 13:59:53.259294  684423 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 13:59:50.717873  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:52.802425  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:54.804753  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:57.758540  684423 kubeadm.go:310] [api-check] The API server is healthy after 4.501842676s
	I0317 13:59:57.770918  684423 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 13:59:57.784450  684423 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 13:59:57.821683  684423 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 13:59:57.821916  684423 kubeadm.go:310] [mark-control-plane] Marking the node bridge-788750 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 13:59:57.835685  684423 kubeadm.go:310] [bootstrap-token] Using token: 6r2rfy.f4amir38rs4aheab
	I0317 13:59:57.836800  684423 out.go:235]   - Configuring RBAC rules ...
	I0317 13:59:57.836921  684423 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 13:59:57.842871  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 13:59:57.849820  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 13:59:57.853155  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 13:59:57.856545  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 13:59:57.862086  684423 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 13:59:58.165290  684423 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 13:59:58.587281  684423 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 13:59:59.166763  684423 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 13:59:59.167778  684423 kubeadm.go:310] 
	I0317 13:59:59.167887  684423 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 13:59:59.167901  684423 kubeadm.go:310] 
	I0317 13:59:59.167991  684423 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 13:59:59.168016  684423 kubeadm.go:310] 
	I0317 13:59:59.168054  684423 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 13:59:59.168111  684423 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 13:59:59.168153  684423 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 13:59:59.168159  684423 kubeadm.go:310] 
	I0317 13:59:59.168201  684423 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 13:59:59.168207  684423 kubeadm.go:310] 
	I0317 13:59:59.168245  684423 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 13:59:59.168250  684423 kubeadm.go:310] 
	I0317 13:59:59.168299  684423 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 13:59:59.168425  684423 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 13:59:59.168502  684423 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 13:59:59.168521  684423 kubeadm.go:310] 
	I0317 13:59:59.168648  684423 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 13:59:59.168770  684423 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 13:59:59.168781  684423 kubeadm.go:310] 
	I0317 13:59:59.168894  684423 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6r2rfy.f4amir38rs4aheab \
	I0317 13:59:59.169039  684423 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 \
	I0317 13:59:59.169080  684423 kubeadm.go:310] 	--control-plane 
	I0317 13:59:59.169096  684423 kubeadm.go:310] 
	I0317 13:59:59.169180  684423 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 13:59:59.169193  684423 kubeadm.go:310] 
	I0317 13:59:59.169265  684423 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6r2rfy.f4amir38rs4aheab \
	I0317 13:59:59.169358  684423 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 
	I0317 13:59:59.170059  684423 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:59:59.170143  684423 cni.go:84] Creating CNI manager for "bridge"
	I0317 13:59:59.171940  684423 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:59:59.173180  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:59:59.183169  684423 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0317 13:59:59.199645  684423 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:59:59.199744  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:59.199776  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-788750 minikube.k8s.io/updated_at=2025_03_17T13_59_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=bridge-788750 minikube.k8s.io/primary=true
	I0317 13:59:59.239955  684423 ops.go:34] apiserver oom_adj: -16
	I0317 13:59:59.366223  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:59.867211  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:57.304825  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:59.803981  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:00.367289  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:00.866372  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:01.366507  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:01.866445  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:02.366509  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:02.866437  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:03.367015  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:03.450761  684423 kubeadm.go:1113] duration metric: took 4.251073397s to wait for elevateKubeSystemPrivileges
	I0317 14:00:03.450805  684423 kubeadm.go:394] duration metric: took 14.295636291s to StartCluster
	I0317 14:00:03.450831  684423 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 14:00:03.450907  684423 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 14:00:03.451925  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 14:00:03.452144  684423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 14:00:03.452156  684423 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 14:00:03.452210  684423 addons.go:69] Setting storage-provisioner=true in profile "bridge-788750"
	I0317 14:00:03.452229  684423 addons.go:238] Setting addon storage-provisioner=true in "bridge-788750"
	I0317 14:00:03.452140  684423 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 14:00:03.452274  684423 host.go:66] Checking if "bridge-788750" exists ...
	I0317 14:00:03.452383  684423 config.go:182] Loaded profile config "bridge-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 14:00:03.452233  684423 addons.go:69] Setting default-storageclass=true in profile "bridge-788750"
	I0317 14:00:03.452450  684423 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-788750"
	I0317 14:00:03.452759  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.452797  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.452814  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.452848  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.454590  684423 out.go:177] * Verifying Kubernetes components...
	I0317 14:00:03.456094  684423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 14:00:03.468607  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I0317 14:00:03.468791  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44917
	I0317 14:00:03.469225  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.469232  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.469737  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.469751  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.469902  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.469927  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.470138  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.470336  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.470543  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 14:00:03.470733  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.470780  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.474090  684423 addons.go:238] Setting addon default-storageclass=true in "bridge-788750"
	I0317 14:00:03.474136  684423 host.go:66] Checking if "bridge-788750" exists ...
	I0317 14:00:03.474497  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.474557  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.490576  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46347
	I0317 14:00:03.491139  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.491781  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.491813  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.492238  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.492487  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 14:00:03.493292  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37659
	I0317 14:00:03.493769  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.494289  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.494321  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.494560  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 14:00:03.494679  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.495346  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.495400  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.496399  684423 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 14:00:02.303406  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:03.303401  682662 pod_ready.go:93] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.303427  682662 pod_ready.go:82] duration metric: took 17.505677844s for pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.303436  682662 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.306929  682662 pod_ready.go:93] pod "etcd-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.306947  682662 pod_ready.go:82] duration metric: took 3.50631ms for pod "etcd-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.306955  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.310273  682662 pod_ready.go:93] pod "kube-apiserver-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.310298  682662 pod_ready.go:82] duration metric: took 3.335994ms for pod "kube-apiserver-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.310311  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.314183  682662 pod_ready.go:93] pod "kube-controller-manager-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.314198  682662 pod_ready.go:82] duration metric: took 3.880278ms for pod "kube-controller-manager-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.314205  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-drfjv" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.318021  682662 pod_ready.go:93] pod "kube-proxy-drfjv" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.318036  682662 pod_ready.go:82] duration metric: took 3.826269ms for pod "kube-proxy-drfjv" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.318043  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.702302  682662 pod_ready.go:93] pod "kube-scheduler-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.702331  682662 pod_ready.go:82] duration metric: took 384.281244ms for pod "kube-scheduler-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.702346  682662 pod_ready.go:39] duration metric: took 17.911515691s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 14:00:03.702367  682662 api_server.go:52] waiting for apiserver process to appear ...
	I0317 14:00:03.702433  682662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 14:00:03.717069  682662 api_server.go:72] duration metric: took 26.067154095s to wait for apiserver process to appear ...
	I0317 14:00:03.717103  682662 api_server.go:88] waiting for apiserver healthz status ...
	I0317 14:00:03.717125  682662 api_server.go:253] Checking apiserver healthz at https://192.168.72.30:8443/healthz ...
	I0317 14:00:03.722046  682662 api_server.go:279] https://192.168.72.30:8443/healthz returned 200:
	ok
	I0317 14:00:03.723179  682662 api_server.go:141] control plane version: v1.32.2
	I0317 14:00:03.723202  682662 api_server.go:131] duration metric: took 6.092065ms to wait for apiserver health ...
	I0317 14:00:03.723210  682662 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 14:00:03.901881  682662 system_pods.go:59] 7 kube-system pods found
	I0317 14:00:03.901916  682662 system_pods.go:61] "coredns-668d6bf9bc-vxj99" [01dabec4-3c24-4158-be92-c3977fb97dfa] Running
	I0317 14:00:03.901922  682662 system_pods.go:61] "etcd-flannel-788750" [59ecce98-e332-4794-969b-ab4e7f6ed07d] Running
	I0317 14:00:03.901926  682662 system_pods.go:61] "kube-apiserver-flannel-788750" [3e3d9c24-5edc-41eb-9283-29aa99fc1350] Running
	I0317 14:00:03.901930  682662 system_pods.go:61] "kube-controller-manager-flannel-788750" [f35d013a-b551-449c-826b-c131d053ca3b] Running
	I0317 14:00:03.901934  682662 system_pods.go:61] "kube-proxy-drfjv" [4f07f0b7-e946-4538-b142-897bdc2bb75d] Running
	I0317 14:00:03.901937  682662 system_pods.go:61] "kube-scheduler-flannel-788750" [ab721524-aac4-418b-990e-1ff6b8018936] Running
	I0317 14:00:03.901940  682662 system_pods.go:61] "storage-provisioner" [4e157f14-2a65-4439-ac31-a04e5cda8332] Running
	I0317 14:00:03.901947  682662 system_pods.go:74] duration metric: took 178.731729ms to wait for pod list to return data ...
	I0317 14:00:03.901954  682662 default_sa.go:34] waiting for default service account to be created ...
	I0317 14:00:04.103094  682662 default_sa.go:45] found service account: "default"
	I0317 14:00:04.103124  682662 default_sa.go:55] duration metric: took 201.164871ms for default service account to be created ...
	I0317 14:00:04.103135  682662 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 14:00:04.301791  682662 system_pods.go:86] 7 kube-system pods found
	I0317 14:00:04.301829  682662 system_pods.go:89] "coredns-668d6bf9bc-vxj99" [01dabec4-3c24-4158-be92-c3977fb97dfa] Running
	I0317 14:00:04.301836  682662 system_pods.go:89] "etcd-flannel-788750" [59ecce98-e332-4794-969b-ab4e7f6ed07d] Running
	I0317 14:00:04.301840  682662 system_pods.go:89] "kube-apiserver-flannel-788750" [3e3d9c24-5edc-41eb-9283-29aa99fc1350] Running
	I0317 14:00:04.301843  682662 system_pods.go:89] "kube-controller-manager-flannel-788750" [f35d013a-b551-449c-826b-c131d053ca3b] Running
	I0317 14:00:04.301847  682662 system_pods.go:89] "kube-proxy-drfjv" [4f07f0b7-e946-4538-b142-897bdc2bb75d] Running
	I0317 14:00:04.301850  682662 system_pods.go:89] "kube-scheduler-flannel-788750" [ab721524-aac4-418b-990e-1ff6b8018936] Running
	I0317 14:00:04.301854  682662 system_pods.go:89] "storage-provisioner" [4e157f14-2a65-4439-ac31-a04e5cda8332] Running
	I0317 14:00:04.301864  682662 system_pods.go:126] duration metric: took 198.721059ms to wait for k8s-apps to be running ...
	I0317 14:00:04.301875  682662 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 14:00:04.301935  682662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 14:00:04.316032  682662 system_svc.go:56] duration metric: took 14.14678ms WaitForService to wait for kubelet
	I0317 14:00:04.316064  682662 kubeadm.go:582] duration metric: took 26.666157602s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 14:00:04.316080  682662 node_conditions.go:102] verifying NodePressure condition ...
	I0317 14:00:04.501869  682662 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 14:00:04.501907  682662 node_conditions.go:123] node cpu capacity is 2
	I0317 14:00:04.501923  682662 node_conditions.go:105] duration metric: took 185.838303ms to run NodePressure ...
	I0317 14:00:04.501938  682662 start.go:241] waiting for startup goroutines ...
	I0317 14:00:04.501947  682662 start.go:246] waiting for cluster config update ...
	I0317 14:00:04.501961  682662 start.go:255] writing updated cluster config ...
	I0317 14:00:04.502390  682662 ssh_runner.go:195] Run: rm -f paused
	I0317 14:00:04.560415  682662 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 14:00:04.562095  682662 out.go:177] * Done! kubectl is now configured to use "flannel-788750" cluster and "default" namespace by default
	I0317 14:00:03.498107  684423 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 14:00:03.498129  684423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 14:00:03.498150  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 14:00:03.501995  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.502514  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 14:00:03.502535  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.502849  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 14:00:03.503042  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 14:00:03.503202  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 14:00:03.503328  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 14:00:03.513291  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I0317 14:00:03.513983  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.514587  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.514619  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.515049  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.515268  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 14:00:03.516963  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 14:00:03.517201  684423 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 14:00:03.517224  684423 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 14:00:03.517247  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 14:00:03.520046  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.520541  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 14:00:03.520586  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.520647  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 14:00:03.520817  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 14:00:03.520958  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 14:00:03.521075  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 14:00:03.660151  684423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 14:00:03.679967  684423 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 14:00:03.844532  684423 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 14:00:03.876251  684423 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 14:00:04.011057  684423 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0317 14:00:04.012032  684423 node_ready.go:35] waiting up to 15m0s for node "bridge-788750" to be "Ready" ...
	I0317 14:00:04.024328  684423 node_ready.go:49] node "bridge-788750" has status "Ready":"True"
	I0317 14:00:04.024353  684423 node_ready.go:38] duration metric: took 12.290285ms for node "bridge-788750" to be "Ready" ...
	I0317 14:00:04.024365  684423 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 14:00:04.028595  684423 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:04.253238  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.253271  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.253666  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.253690  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.253696  684423 main.go:141] libmachine: (bridge-788750) DBG | Closing plugin on server side
	I0317 14:00:04.253704  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.253714  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.253988  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.254006  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.262922  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.262941  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.263255  684423 main.go:141] libmachine: (bridge-788750) DBG | Closing plugin on server side
	I0317 14:00:04.263297  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.263312  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.515692  684423 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-788750" context rescaled to 1 replicas
	I0317 14:00:04.766030  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.766064  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.766393  684423 main.go:141] libmachine: (bridge-788750) DBG | Closing plugin on server side
	I0317 14:00:04.766450  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.766466  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.766480  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.766489  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.766715  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.766733  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.768395  684423 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0317 14:00:04.769582  684423 addons.go:514] duration metric: took 1.317420787s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0317 14:00:08.100227  673643 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0317 14:00:08.100326  673643 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0317 14:00:08.101702  673643 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0317 14:00:08.101771  673643 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 14:00:08.101843  673643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 14:00:08.101949  673643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 14:00:08.102103  673643 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0317 14:00:08.102213  673643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 14:00:08.103900  673643 out.go:235]   - Generating certificates and keys ...
	I0317 14:00:08.103990  673643 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 14:00:08.104047  673643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 14:00:08.104124  673643 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0317 14:00:08.104200  673643 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0317 14:00:08.104303  673643 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0317 14:00:08.104384  673643 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0317 14:00:08.104471  673643 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0317 14:00:08.104558  673643 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0317 14:00:08.104655  673643 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0317 14:00:08.104750  673643 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0317 14:00:08.104799  673643 kubeadm.go:310] [certs] Using the existing "sa" key
	I0317 14:00:08.104865  673643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 14:00:08.104953  673643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 14:00:08.105028  673643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 14:00:08.105106  673643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 14:00:08.105200  673643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 14:00:08.105374  673643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 14:00:08.105449  673643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 14:00:08.105497  673643 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 14:00:08.105613  673643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 14:00:08.107093  673643 out.go:235]   - Booting up control plane ...
	I0317 14:00:08.107203  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 14:00:08.107321  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 14:00:08.107412  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 14:00:08.107544  673643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 14:00:08.107730  673643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0317 14:00:08.107811  673643 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0317 14:00:08.107903  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108136  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108241  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108504  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108614  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108874  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108968  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.109174  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.109230  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.109440  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.109465  673643 kubeadm.go:310] 
	I0317 14:00:08.109515  673643 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0317 14:00:08.109565  673643 kubeadm.go:310] 		timed out waiting for the condition
	I0317 14:00:08.109575  673643 kubeadm.go:310] 
	I0317 14:00:08.109617  673643 kubeadm.go:310] 	This error is likely caused by:
	I0317 14:00:08.109657  673643 kubeadm.go:310] 		- The kubelet is not running
	I0317 14:00:08.109782  673643 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0317 14:00:08.109799  673643 kubeadm.go:310] 
	I0317 14:00:08.109930  673643 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0317 14:00:08.109984  673643 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0317 14:00:08.110027  673643 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0317 14:00:08.110035  673643 kubeadm.go:310] 
	I0317 14:00:08.110118  673643 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0317 14:00:08.110184  673643 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0317 14:00:08.110190  673643 kubeadm.go:310] 
	I0317 14:00:08.110328  673643 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0317 14:00:08.110435  673643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0317 14:00:08.110496  673643 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0317 14:00:08.110562  673643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0317 14:00:08.110602  673643 kubeadm.go:310] 
	I0317 14:00:08.110625  673643 kubeadm.go:394] duration metric: took 7m57.828587617s to StartCluster
	I0317 14:00:08.110682  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 14:00:08.110737  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 14:00:08.142741  673643 cri.go:89] found id: ""
	I0317 14:00:08.142781  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.142795  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 14:00:08.142804  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 14:00:08.142877  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 14:00:08.174753  673643 cri.go:89] found id: ""
	I0317 14:00:08.174784  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.174796  673643 logs.go:284] No container was found matching "etcd"
	I0317 14:00:08.174804  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 14:00:08.174859  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 14:00:08.204965  673643 cri.go:89] found id: ""
	I0317 14:00:08.204997  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.205009  673643 logs.go:284] No container was found matching "coredns"
	I0317 14:00:08.205017  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 14:00:08.205081  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 14:00:08.235717  673643 cri.go:89] found id: ""
	I0317 14:00:08.235749  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.235757  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 14:00:08.235767  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 14:00:08.235833  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 14:00:08.265585  673643 cri.go:89] found id: ""
	I0317 14:00:08.265613  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.265623  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 14:00:08.265631  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 14:00:08.265718  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 14:00:08.295600  673643 cri.go:89] found id: ""
	I0317 14:00:08.295629  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.295641  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 14:00:08.295648  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 14:00:08.295713  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 14:00:08.327749  673643 cri.go:89] found id: ""
	I0317 14:00:08.327778  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.327787  673643 logs.go:284] No container was found matching "kindnet"
	I0317 14:00:08.327794  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 14:00:08.327855  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 14:00:08.359913  673643 cri.go:89] found id: ""
	I0317 14:00:08.359944  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.359952  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 14:00:08.359962  673643 logs.go:123] Gathering logs for container status ...
	I0317 14:00:08.359975  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 14:00:08.396929  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 14:00:08.396959  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 14:00:08.451498  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 14:00:08.451556  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 14:00:08.464742  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 14:00:08.464771  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 14:00:08.537703  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 14:00:08.537733  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 14:00:08.537749  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0317 14:00:08.658936  673643 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0317 14:00:08.659006  673643 out.go:270] * 
	W0317 14:00:08.659061  673643 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0317 14:00:08.659074  673643 out.go:270] * 
	W0317 14:00:08.659944  673643 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 14:00:08.663521  673643 out.go:201] 
	W0317 14:00:08.664750  673643 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0317 14:00:08.664794  673643 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0317 14:00:08.664812  673643 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0317 14:00:08.666351  673643 out.go:201] 
	I0317 14:00:06.033655  684423 pod_ready.go:103] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:08.034208  684423 pod_ready.go:103] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:10.034443  684423 pod_ready.go:103] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:12.534251  684423 pod_ready.go:103] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:14.536107  684423 pod_ready.go:103] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:15.534010  684423 pod_ready.go:98] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.172}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-03-17 14:00:03 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-03-17 14:00:04 +0000 UTC,FinishedAt:2025-03-17 14:00:14 +0000 UTC,ContainerID:cri-o://a9392da0a1a0a548796c19285db75bbfe071219c3b971d1a2482d18a86574671,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://a9392da0a1a0a548796c19285db75bbfe071219c3b971d1a2482d18a86574671 Started:0xc0020858f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002254d20} {Name:kube-api-access-lhzw4 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002254d30}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0317 14:00:15.534037  684423 pod_ready.go:82] duration metric: took 11.50541352s for pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace to be "Ready" ...
	E0317 14:00:15.534049  684423 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.172 HostIPs:[{IP:192.168.39.172}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-03-17 14:00:03 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-03-17 14:00:04 +0000 UTC,FinishedAt:2025-03-17 14:00:14 +0000 UTC,ContainerID:cri-o://a9392da0a1a0a548796c19285db75bbfe071219c3b971d1a2482d18a86574671,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://a9392da0a1a0a548796c19285db75bbfe071219c3b971d1a2482d18a86574671 Started:0xc0020858f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002254d20} {Name:kube-api-access-lhzw4 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002254d30}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0317 14:00:15.534059  684423 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-r8ngr" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.537848  684423 pod_ready.go:93] pod "coredns-668d6bf9bc-r8ngr" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:15.537881  684423 pod_ready.go:82] duration metric: took 3.813823ms for pod "coredns-668d6bf9bc-r8ngr" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.537896  684423 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.540753  684423 pod_ready.go:93] pod "etcd-bridge-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:15.540773  684423 pod_ready.go:82] duration metric: took 2.869364ms for pod "etcd-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.540784  684423 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.543734  684423 pod_ready.go:93] pod "kube-apiserver-bridge-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:15.543759  684423 pod_ready.go:82] duration metric: took 2.967445ms for pod "kube-apiserver-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.543771  684423 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.546758  684423 pod_ready.go:93] pod "kube-controller-manager-bridge-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:15.546781  684423 pod_ready.go:82] duration metric: took 3.002194ms for pod "kube-controller-manager-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.546792  684423 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-kj4kx" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.933502  684423 pod_ready.go:93] pod "kube-proxy-kj4kx" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:15.933536  684423 pod_ready.go:82] duration metric: took 386.736479ms for pod "kube-proxy-kj4kx" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.933546  684423 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:16.331873  684423 pod_ready.go:93] pod "kube-scheduler-bridge-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:16.331907  684423 pod_ready.go:82] duration metric: took 398.352288ms for pod "kube-scheduler-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:16.331919  684423 pod_ready.go:39] duration metric: took 12.307539787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 14:00:16.331944  684423 api_server.go:52] waiting for apiserver process to appear ...
	I0317 14:00:16.332022  684423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 14:00:16.346729  684423 api_server.go:72] duration metric: took 12.894460738s to wait for apiserver process to appear ...
	I0317 14:00:16.346763  684423 api_server.go:88] waiting for apiserver healthz status ...
	I0317 14:00:16.346789  684423 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0317 14:00:16.351864  684423 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0317 14:00:16.352901  684423 api_server.go:141] control plane version: v1.32.2
	I0317 14:00:16.352933  684423 api_server.go:131] duration metric: took 6.160385ms to wait for apiserver health ...
	I0317 14:00:16.352944  684423 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 14:00:16.532789  684423 system_pods.go:59] 7 kube-system pods found
	I0317 14:00:16.532831  684423 system_pods.go:61] "coredns-668d6bf9bc-r8ngr" [4745e44f-92f0-4965-8322-547a570326b8] Running
	I0317 14:00:16.532839  684423 system_pods.go:61] "etcd-bridge-788750" [3c21b3bd-7442-4e7d-b74f-d67a6e8f0094] Running
	I0317 14:00:16.532845  684423 system_pods.go:61] "kube-apiserver-bridge-788750" [ae0d5960-d302-4928-90e0-83b4938145c2] Running
	I0317 14:00:16.532851  684423 system_pods.go:61] "kube-controller-manager-bridge-788750" [b150c081-ba22-45da-b020-0e38fbb646b8] Running
	I0317 14:00:16.532856  684423 system_pods.go:61] "kube-proxy-kj4kx" [dd396806-07b3-4394-9ac2-038bbadaad2d] Running
	I0317 14:00:16.532862  684423 system_pods.go:61] "kube-scheduler-bridge-788750" [e7e68119-8c5c-4a4d-bd23-a64d3dbee81a] Running
	I0317 14:00:16.532867  684423 system_pods.go:61] "storage-provisioner" [44001b48-9220-4402-9fb4-1c662a5d512e] Running
	I0317 14:00:16.532874  684423 system_pods.go:74] duration metric: took 179.923311ms to wait for pod list to return data ...
	I0317 14:00:16.532887  684423 default_sa.go:34] waiting for default service account to be created ...
	I0317 14:00:16.732122  684423 default_sa.go:45] found service account: "default"
	I0317 14:00:16.732153  684423 default_sa.go:55] duration metric: took 199.259905ms for default service account to be created ...
	I0317 14:00:16.732164  684423 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 14:00:16.932578  684423 system_pods.go:86] 7 kube-system pods found
	I0317 14:00:16.932613  684423 system_pods.go:89] "coredns-668d6bf9bc-r8ngr" [4745e44f-92f0-4965-8322-547a570326b8] Running
	I0317 14:00:16.932620  684423 system_pods.go:89] "etcd-bridge-788750" [3c21b3bd-7442-4e7d-b74f-d67a6e8f0094] Running
	I0317 14:00:16.932626  684423 system_pods.go:89] "kube-apiserver-bridge-788750" [ae0d5960-d302-4928-90e0-83b4938145c2] Running
	I0317 14:00:16.932629  684423 system_pods.go:89] "kube-controller-manager-bridge-788750" [b150c081-ba22-45da-b020-0e38fbb646b8] Running
	I0317 14:00:16.932633  684423 system_pods.go:89] "kube-proxy-kj4kx" [dd396806-07b3-4394-9ac2-038bbadaad2d] Running
	I0317 14:00:16.932637  684423 system_pods.go:89] "kube-scheduler-bridge-788750" [e7e68119-8c5c-4a4d-bd23-a64d3dbee81a] Running
	I0317 14:00:16.932642  684423 system_pods.go:89] "storage-provisioner" [44001b48-9220-4402-9fb4-1c662a5d512e] Running
	I0317 14:00:16.932652  684423 system_pods.go:126] duration metric: took 200.479611ms to wait for k8s-apps to be running ...
	I0317 14:00:16.932663  684423 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 14:00:16.932722  684423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 14:00:16.946942  684423 system_svc.go:56] duration metric: took 14.265343ms WaitForService to wait for kubelet
	I0317 14:00:16.946983  684423 kubeadm.go:582] duration metric: took 13.494722207s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 14:00:16.947005  684423 node_conditions.go:102] verifying NodePressure condition ...
	I0317 14:00:17.132068  684423 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 14:00:17.132107  684423 node_conditions.go:123] node cpu capacity is 2
	I0317 14:00:17.132125  684423 node_conditions.go:105] duration metric: took 185.114846ms to run NodePressure ...
	I0317 14:00:17.132143  684423 start.go:241] waiting for startup goroutines ...
	I0317 14:00:17.132153  684423 start.go:246] waiting for cluster config update ...
	I0317 14:00:17.132170  684423 start.go:255] writing updated cluster config ...
	I0317 14:00:17.132492  684423 ssh_runner.go:195] Run: rm -f paused
	I0317 14:00:17.180890  684423 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 14:00:17.183880  684423 out.go:177] * Done! kubectl is now configured to use "bridge-788750" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.127534419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220551127514377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=308ed0df-dec0-43b4-bff9-8c321d18c8fe name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.128877637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9bef8fa-85a2-4dca-902e-cb48b05fb48a name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.128924206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9bef8fa-85a2-4dca-902e-cb48b05fb48a name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.128964240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e9bef8fa-85a2-4dca-902e-cb48b05fb48a name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.157279294Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27160953-b7ca-403b-9efd-b08f25b0e2c7 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.157360916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27160953-b7ca-403b-9efd-b08f25b0e2c7 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.158250345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5b9f8c9-094f-4c39-984e-713bee299a58 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.158695181Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220551158673845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5b9f8c9-094f-4c39-984e-713bee299a58 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.159228665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f82e2c6-e40f-4ee2-85c9-ebc38e5c29f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.159286262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f82e2c6-e40f-4ee2-85c9-ebc38e5c29f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.159322609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3f82e2c6-e40f-4ee2-85c9-ebc38e5c29f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.187278434Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa4d81a5-658a-4627-baf2-1f5dbbf89ea4 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.187361564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa4d81a5-658a-4627-baf2-1f5dbbf89ea4 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.188315276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d449c2d4-255d-4e00-a9d0-a48ada7cff1a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.188719670Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220551188699422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d449c2d4-255d-4e00-a9d0-a48ada7cff1a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.189198737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3356e1e3-2319-4045-9ad4-5715faa7bde2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.189259711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3356e1e3-2319-4045-9ad4-5715faa7bde2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.189293252Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3356e1e3-2319-4045-9ad4-5715faa7bde2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.219816524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3aec2176-c834-4ec2-8fd2-47887141dc39 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.219886197Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3aec2176-c834-4ec2-8fd2-47887141dc39 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.220767238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82177249-d387-43dc-9a1d-9ca99822a503 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.221122217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220551221100708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82177249-d387-43dc-9a1d-9ca99822a503 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.221603442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0d6e6c9-ddd5-4a6b-8dbd-f2cd7f3d94c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.221664376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0d6e6c9-ddd5-4a6b-8dbd-f2cd7f3d94c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:09:11 old-k8s-version-803027 crio[632]: time="2025-03-17 14:09:11.221698777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c0d6e6c9-ddd5-4a6b-8dbd-f2cd7f3d94c6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar17 13:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049623] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037597] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.982931] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.930656] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.556247] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar17 13:52] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.058241] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060217] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.180401] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.139897] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.253988] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.875702] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.060155] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.298688] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.779171] kauditd_printk_skb: 46 callbacks suppressed
	[Mar17 13:56] systemd-fstab-generator[5028]: Ignoring "noauto" option for root device
	[Mar17 13:58] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.074533] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:09:11 up 17 min,  0 users,  load average: 0.02, 0.05, 0.04
	Linux old-k8s-version-803027 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000ba0510)
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]: goroutine 153 [select]:
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00007aef0, 0x4f0ac20, 0xc000b9c370, 0x1, 0xc0001020c0)
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d9180, 0xc0001020c0)
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000ba2250, 0xc0009f1220)
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 17 14:09:09 old-k8s-version-803027 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 17 14:09:09 old-k8s-version-803027 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 17 14:09:09 old-k8s-version-803027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Mar 17 14:09:09 old-k8s-version-803027 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 17 14:09:09 old-k8s-version-803027 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6489]: I0317 14:09:09.806827    6489 server.go:416] Version: v1.20.0
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6489]: I0317 14:09:09.807060    6489 server.go:837] Client rotation is on, will bootstrap in background
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6489]: I0317 14:09:09.808829    6489 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6489]: W0317 14:09:09.809721    6489 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 17 14:09:09 old-k8s-version-803027 kubelet[6489]: I0317 14:09:09.809748    6489 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-803027 -n old-k8s-version-803027
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 2 (235.09333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-803027" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.47s)
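These post-mortem checks can also be reproduced by hand: minikube's status --format flag accepts a Go template over the status fields, and the kubernetes-dashboard poll in the AddonExistsAfterStop test below is an ordinary label-selector query (the combined template and the kubectl context name are illustrative, assuming the standard status fields and that the profile name doubles as the kubectl context):
	out/minikube-linux-amd64 status --format='{{.Host}} {{.APIServer}}' -p old-k8s-version-803027 -n old-k8s-version-803027
	kubectl --context old-k8s-version-803027 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard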

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (353.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:09:15.390508  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:09:34.615404  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:10:04.591153  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:10:17.594294  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... 12 identical warnings omitted ...]
E0317 14:10:32.295931  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... 12 identical warnings omitted ...]
E0317 14:10:45.295662  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... 61 identical warnings omitted ...]
E0317 14:11:47.531334  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... 16 identical warnings omitted ...]
E0317 14:12:03.782372  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/auto-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... 7 identical warnings omitted ...]
E0317 14:12:12.573114  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... 6 identical warnings omitted ...]
E0317 14:12:19.525838  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/no-preload-142429/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... 18 identical warnings omitted ...]
E0317 14:12:38.185796  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/kindnet-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
[... 36 identical warnings omitted ...]
E0317 14:13:15.512146  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/calico-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:13:42.592086  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/no-preload-142429/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:13:44.452059  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:13:47.686801  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/custom-flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:14:04.111828  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/default-k8s-diff-port-064245/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
E0317 14:14:06.911099  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/enable-default-cni-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.229:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.229:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-803027 -n old-k8s-version-803027
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 2 (228.492174ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-803027" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-803027 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-803027 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.758µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-803027 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 2 (214.780511ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-803027 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-788750 sudo iptables                       | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo docker                         | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo cat                            | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo                                | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo find                           | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-788750 sudo crio                           | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-788750                                     | bridge-788750 | jenkins | v1.35.0 | 17 Mar 25 14:00 UTC | 17 Mar 25 14:00 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:59:14
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:59:14.981692  684423 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:59:14.981852  684423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:59:14.981867  684423 out.go:358] Setting ErrFile to fd 2...
	I0317 13:59:14.981874  684423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:59:14.982141  684423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:59:14.982809  684423 out.go:352] Setting JSON to false
	I0317 13:59:14.984111  684423 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13299,"bootTime":1742206656,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:59:14.984207  684423 start.go:139] virtualization: kvm guest
	I0317 13:59:14.986343  684423 out.go:177] * [bridge-788750] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:59:14.987706  684423 notify.go:220] Checking for updates...
	I0317 13:59:14.987715  684423 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:59:14.989330  684423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:59:14.990916  684423 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:59:14.992287  684423 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:59:14.993610  684423 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:59:14.995116  684423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:59:14.997007  684423 config.go:182] Loaded profile config "enable-default-cni-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:14.997099  684423 config.go:182] Loaded profile config "flannel-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:14.997181  684423 config.go:182] Loaded profile config "old-k8s-version-803027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:59:14.997265  684423 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:59:15.035775  684423 out.go:177] * Using the kvm2 driver based on user configuration
	I0317 13:59:14.820648  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.821374  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has current primary IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.821404  682662 main.go:141] libmachine: (flannel-788750) found domain IP: 192.168.72.30
	I0317 13:59:14.821418  682662 main.go:141] libmachine: (flannel-788750) reserving static IP address...
	I0317 13:59:14.821957  682662 main.go:141] libmachine: (flannel-788750) DBG | unable to find host DHCP lease matching {name: "flannel-788750", mac: "52:54:00:55:e8:19", ip: "192.168.72.30"} in network mk-flannel-788750
	I0317 13:59:14.906769  682662 main.go:141] libmachine: (flannel-788750) DBG | Getting to WaitForSSH function...
	I0317 13:59:14.906805  682662 main.go:141] libmachine: (flannel-788750) reserved static IP address 192.168.72.30 for domain flannel-788750
	I0317 13:59:14.906819  682662 main.go:141] libmachine: (flannel-788750) waiting for SSH...
	I0317 13:59:14.909743  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.910088  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:14.910120  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:14.910299  682662 main.go:141] libmachine: (flannel-788750) DBG | Using SSH client type: external
	I0317 13:59:14.910327  682662 main.go:141] libmachine: (flannel-788750) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa (-rw-------)
	I0317 13:59:14.910360  682662 main.go:141] libmachine: (flannel-788750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:59:14.910373  682662 main.go:141] libmachine: (flannel-788750) DBG | About to run SSH command:
	I0317 13:59:14.910388  682662 main.go:141] libmachine: (flannel-788750) DBG | exit 0
	I0317 13:59:15.039803  682662 main.go:141] libmachine: (flannel-788750) DBG | SSH cmd err, output: <nil>: 
	I0317 13:59:15.040031  682662 main.go:141] libmachine: (flannel-788750) KVM machine creation complete
	I0317 13:59:15.040330  682662 main.go:141] libmachine: (flannel-788750) Calling .GetConfigRaw
	I0317 13:59:15.040923  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:15.041146  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:15.041319  682662 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 13:59:15.041338  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:15.037243  684423 start.go:297] selected driver: kvm2
	I0317 13:59:15.037267  684423 start.go:901] validating driver "kvm2" against <nil>
	I0317 13:59:15.037287  684423 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:59:15.038541  684423 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:59:15.038644  684423 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 13:59:15.057562  684423 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 13:59:15.057627  684423 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 13:59:15.057863  684423 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:59:15.057895  684423 cni.go:84] Creating CNI manager for "bridge"
	I0317 13:59:15.057901  684423 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 13:59:15.057945  684423 start.go:340] cluster config:
	{Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:59:15.058020  684423 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:59:15.059766  684423 out.go:177] * Starting "bridge-788750" primary control-plane node in "bridge-788750" cluster
	I0317 13:59:15.061061  684423 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:59:15.061110  684423 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0317 13:59:15.061133  684423 cache.go:56] Caching tarball of preloaded images
	I0317 13:59:15.061226  684423 preload.go:172] Found /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0317 13:59:15.061242  684423 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0317 13:59:15.061359  684423 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/config.json ...
	I0317 13:59:15.061391  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/config.json: {Name:mkeb86f621957feb90cebae88f4bfc025146aa69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:15.061584  684423 start.go:360] acquireMachinesLock for bridge-788750: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:59:16.236468  684423 start.go:364] duration metric: took 1.174838631s to acquireMachinesLock for "bridge-788750"
	I0317 13:59:16.236553  684423 start.go:93] Provisioning new machine with config: &{Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:59:16.236662  684423 start.go:125] createHost starting for "" (driver="kvm2")
	I0317 13:59:15.042960  682662 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 13:59:15.042977  682662 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 13:59:15.042984  682662 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 13:59:15.042994  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.046053  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.046440  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.046460  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.046654  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.047369  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.047564  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.047723  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.047905  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.048115  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.048125  682662 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 13:59:15.155136  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:59:15.155161  682662 main.go:141] libmachine: Detecting the provisioner...
	I0317 13:59:15.155171  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.157989  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.158314  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.158344  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.158604  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.158819  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.158982  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.159164  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.159287  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.159569  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.159584  682662 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 13:59:15.263937  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 13:59:15.264006  682662 main.go:141] libmachine: found compatible host: buildroot
	I0317 13:59:15.264013  682662 main.go:141] libmachine: Provisioning with buildroot...
	I0317 13:59:15.264021  682662 main.go:141] libmachine: (flannel-788750) Calling .GetMachineName
	I0317 13:59:15.264323  682662 buildroot.go:166] provisioning hostname "flannel-788750"
	I0317 13:59:15.264358  682662 main.go:141] libmachine: (flannel-788750) Calling .GetMachineName
	I0317 13:59:15.264595  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.267397  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.267894  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.267919  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.268123  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.268363  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.268540  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.268702  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.268870  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.269106  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.269121  682662 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-788750 && echo "flannel-788750" | sudo tee /etc/hostname
	I0317 13:59:15.393761  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-788750
	
	I0317 13:59:15.393795  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.396701  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.397053  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.397079  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.397315  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.397527  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.397685  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.397812  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.397956  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.398219  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.398235  682662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-788750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-788750/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-788750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:59:15.512038  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:59:15.512072  682662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:59:15.512093  682662 buildroot.go:174] setting up certificates
	I0317 13:59:15.512102  682662 provision.go:84] configureAuth start
	I0317 13:59:15.512110  682662 main.go:141] libmachine: (flannel-788750) Calling .GetMachineName
	I0317 13:59:15.512392  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:15.515143  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.515466  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.515492  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.515711  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.517703  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.517986  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.518013  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.518136  682662 provision.go:143] copyHostCerts
	I0317 13:59:15.518194  682662 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:59:15.518211  682662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:59:15.518281  682662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:59:15.518370  682662 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:59:15.518378  682662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:59:15.518401  682662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:59:15.518451  682662 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:59:15.518459  682662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:59:15.518487  682662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:59:15.518537  682662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.flannel-788750 san=[127.0.0.1 192.168.72.30 flannel-788750 localhost minikube]
	I0317 13:59:15.606367  682662 provision.go:177] copyRemoteCerts
	I0317 13:59:15.606436  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:59:15.606478  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.608965  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.609288  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.609320  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.609467  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.609677  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.609868  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.610035  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:15.692959  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:59:15.715060  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0317 13:59:15.736168  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 13:59:15.757347  682662 provision.go:87] duration metric: took 245.231065ms to configureAuth
	I0317 13:59:15.757375  682662 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:59:15.757523  682662 config.go:182] Loaded profile config "flannel-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:15.757599  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.760083  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.760447  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.760473  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.760703  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.760886  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.761040  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.761189  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.761364  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:15.761619  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:15.761640  682662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:59:15.989797  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:59:15.989831  682662 main.go:141] libmachine: Checking connection to Docker...
	I0317 13:59:15.989841  682662 main.go:141] libmachine: (flannel-788750) Calling .GetURL
	I0317 13:59:15.991175  682662 main.go:141] libmachine: (flannel-788750) DBG | using libvirt version 6000000
	I0317 13:59:15.993619  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.993970  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.993998  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.994173  682662 main.go:141] libmachine: Docker is up and running!
	I0317 13:59:15.994189  682662 main.go:141] libmachine: Reticulating splines...
	I0317 13:59:15.994198  682662 client.go:171] duration metric: took 25.832600711s to LocalClient.Create
	I0317 13:59:15.994227  682662 start.go:167] duration metric: took 25.832673652s to libmachine.API.Create "flannel-788750"
	I0317 13:59:15.994239  682662 start.go:293] postStartSetup for "flannel-788750" (driver="kvm2")
	I0317 13:59:15.994255  682662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:59:15.994280  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:15.994552  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:59:15.994591  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:15.996836  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.997188  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:15.997218  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:15.997354  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:15.997523  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:15.997708  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:15.997830  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:16.082655  682662 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:59:16.086465  682662 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:59:16.086500  682662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:59:16.086557  682662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:59:16.086623  682662 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:59:16.086707  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:59:16.096327  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:16.120987  682662 start.go:296] duration metric: took 126.730504ms for postStartSetup
	I0317 13:59:16.121051  682662 main.go:141] libmachine: (flannel-788750) Calling .GetConfigRaw
	I0317 13:59:16.121795  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:16.124252  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.124669  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.124695  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.124960  682662 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/config.json ...
	I0317 13:59:16.125174  682662 start.go:128] duration metric: took 25.986754439s to createHost
	I0317 13:59:16.125209  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:16.127973  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.128376  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.128405  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.128538  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:16.128709  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.128874  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.129023  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:16.129206  682662 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:16.129486  682662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0317 13:59:16.129501  682662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:59:16.236319  682662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219956.214556978
	
	I0317 13:59:16.236345  682662 fix.go:216] guest clock: 1742219956.214556978
	I0317 13:59:16.236353  682662 fix.go:229] Guest: 2025-03-17 13:59:16.214556978 +0000 UTC Remote: 2025-03-17 13:59:16.125191891 +0000 UTC m=+26.132597802 (delta=89.365087ms)
	I0317 13:59:16.236374  682662 fix.go:200] guest clock delta is within tolerance: 89.365087ms
	I0317 13:59:16.236379  682662 start.go:83] releasing machines lock for "flannel-788750", held for 26.098086792s
	I0317 13:59:16.236406  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.236717  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:16.240150  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.242931  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.242954  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.243184  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.243857  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.247621  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:16.247686  682662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:59:16.247747  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:16.247854  682662 ssh_runner.go:195] Run: cat /version.json
	I0317 13:59:16.247879  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:16.251119  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251267  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251402  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.251424  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251567  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:16.251590  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:16.251600  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:16.251792  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:16.251874  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.251958  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:16.252029  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:16.252213  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:16.252268  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:16.252413  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:16.336552  682662 ssh_runner.go:195] Run: systemctl --version
	I0317 13:59:16.372394  682662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:59:16.543479  682662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:59:16.549196  682662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:59:16.549278  682662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:59:16.567894  682662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:59:16.567923  682662 start.go:495] detecting cgroup driver to use...
	I0317 13:59:16.568007  682662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:59:16.591718  682662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:59:16.606627  682662 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:59:16.606699  682662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:59:16.620043  682662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:59:16.635200  682662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:59:16.752393  682662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:59:16.899089  682662 docker.go:233] disabling docker service ...
	I0317 13:59:16.899148  682662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:59:16.914164  682662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:59:16.928117  682662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:59:17.053498  682662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:59:17.189186  682662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:59:17.203833  682662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:59:17.223316  682662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0317 13:59:17.223397  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.233530  682662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:59:17.233601  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.243490  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.253607  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.263744  682662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:59:17.274183  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.287378  682662 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.303360  682662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:17.313576  682662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:59:17.322490  682662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:59:17.322555  682662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:59:17.336395  682662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:59:17.345254  682662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:17.458590  682662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:59:17.543773  682662 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:59:17.543842  682662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:59:17.548368  682662 start.go:563] Will wait 60s for crictl version
	I0317 13:59:17.548436  682662 ssh_runner.go:195] Run: which crictl
	I0317 13:59:17.552779  682662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:59:17.595329  682662 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:59:17.595419  682662 ssh_runner.go:195] Run: crio --version
	I0317 13:59:17.621136  682662 ssh_runner.go:195] Run: crio --version
	I0317 13:59:17.650209  682662 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0317 13:59:16.239781  684423 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0317 13:59:16.239987  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:16.240028  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:16.260585  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33303
	I0317 13:59:16.261043  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:16.261626  684423 main.go:141] libmachine: Using API Version  1
	I0317 13:59:16.261650  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:16.262203  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:16.262429  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:16.262618  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:16.262805  684423 start.go:159] libmachine.API.Create for "bridge-788750" (driver="kvm2")
	I0317 13:59:16.262832  684423 client.go:168] LocalClient.Create starting
	I0317 13:59:16.262873  684423 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem
	I0317 13:59:16.262914  684423 main.go:141] libmachine: Decoding PEM data...
	I0317 13:59:16.262936  684423 main.go:141] libmachine: Parsing certificate...
	I0317 13:59:16.263026  684423 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem
	I0317 13:59:16.263055  684423 main.go:141] libmachine: Decoding PEM data...
	I0317 13:59:16.263627  684423 main.go:141] libmachine: Parsing certificate...
	I0317 13:59:16.263688  684423 main.go:141] libmachine: Running pre-create checks...
	I0317 13:59:16.263699  684423 main.go:141] libmachine: (bridge-788750) Calling .PreCreateCheck
	I0317 13:59:16.265317  684423 main.go:141] libmachine: (bridge-788750) Calling .GetConfigRaw
	I0317 13:59:16.266685  684423 main.go:141] libmachine: Creating machine...
	I0317 13:59:16.266703  684423 main.go:141] libmachine: (bridge-788750) Calling .Create
	I0317 13:59:16.266873  684423 main.go:141] libmachine: (bridge-788750) creating KVM machine...
	I0317 13:59:16.266894  684423 main.go:141] libmachine: (bridge-788750) creating network...
	I0317 13:59:16.268321  684423 main.go:141] libmachine: (bridge-788750) DBG | found existing default KVM network
	I0317 13:59:16.270323  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.270123  684478 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013de0}
	I0317 13:59:16.270347  684423 main.go:141] libmachine: (bridge-788750) DBG | created network xml: 
	I0317 13:59:16.270356  684423 main.go:141] libmachine: (bridge-788750) DBG | <network>
	I0317 13:59:16.270365  684423 main.go:141] libmachine: (bridge-788750) DBG |   <name>mk-bridge-788750</name>
	I0317 13:59:16.270372  684423 main.go:141] libmachine: (bridge-788750) DBG |   <dns enable='no'/>
	I0317 13:59:16.270379  684423 main.go:141] libmachine: (bridge-788750) DBG |   
	I0317 13:59:16.270388  684423 main.go:141] libmachine: (bridge-788750) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0317 13:59:16.270396  684423 main.go:141] libmachine: (bridge-788750) DBG |     <dhcp>
	I0317 13:59:16.270404  684423 main.go:141] libmachine: (bridge-788750) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0317 13:59:16.270412  684423 main.go:141] libmachine: (bridge-788750) DBG |     </dhcp>
	I0317 13:59:16.270418  684423 main.go:141] libmachine: (bridge-788750) DBG |   </ip>
	I0317 13:59:16.270426  684423 main.go:141] libmachine: (bridge-788750) DBG |   
	I0317 13:59:16.270432  684423 main.go:141] libmachine: (bridge-788750) DBG | </network>
	I0317 13:59:16.270440  684423 main.go:141] libmachine: (bridge-788750) DBG | 
	I0317 13:59:16.276393  684423 main.go:141] libmachine: (bridge-788750) DBG | trying to create private KVM network mk-bridge-788750 192.168.39.0/24...
	I0317 13:59:16.361973  684423 main.go:141] libmachine: (bridge-788750) setting up store path in /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750 ...
	I0317 13:59:16.362009  684423 main.go:141] libmachine: (bridge-788750) DBG | private KVM network mk-bridge-788750 192.168.39.0/24 created
	I0317 13:59:16.362022  684423 main.go:141] libmachine: (bridge-788750) building disk image from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0317 13:59:16.362044  684423 main.go:141] libmachine: (bridge-788750) Downloading /home/jenkins/minikube-integration/20539-621978/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 13:59:16.362105  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.359405  684478 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:59:16.657775  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.657652  684478 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa...
	I0317 13:59:16.896870  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.896712  684478 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/bridge-788750.rawdisk...
	I0317 13:59:16.896904  684423 main.go:141] libmachine: (bridge-788750) DBG | Writing magic tar header
	I0317 13:59:16.896919  684423 main.go:141] libmachine: (bridge-788750) DBG | Writing SSH key tar header
	I0317 13:59:16.896931  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:16.896829  684478 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750 ...
	I0317 13:59:16.896949  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750
	I0317 13:59:16.896963  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750 (perms=drwx------)
	I0317 13:59:16.896975  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines
	I0317 13:59:16.896989  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:59:16.897000  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978
	I0317 13:59:16.897011  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines (perms=drwxr-xr-x)
	I0317 13:59:16.897027  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube (perms=drwxr-xr-x)
	I0317 13:59:16.897040  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration/20539-621978 (perms=drwxrwxr-x)
	I0317 13:59:16.897049  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0317 13:59:16.897059  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home/jenkins
	I0317 13:59:16.897070  684423 main.go:141] libmachine: (bridge-788750) DBG | checking permissions on dir: /home
	I0317 13:59:16.897081  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0317 13:59:16.897102  684423 main.go:141] libmachine: (bridge-788750) DBG | skipping /home - not owner
	I0317 13:59:16.897114  684423 main.go:141] libmachine: (bridge-788750) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0317 13:59:16.897128  684423 main.go:141] libmachine: (bridge-788750) creating domain...
	I0317 13:59:16.898240  684423 main.go:141] libmachine: (bridge-788750) define libvirt domain using xml: 
	I0317 13:59:16.898267  684423 main.go:141] libmachine: (bridge-788750) <domain type='kvm'>
	I0317 13:59:16.898276  684423 main.go:141] libmachine: (bridge-788750)   <name>bridge-788750</name>
	I0317 13:59:16.898286  684423 main.go:141] libmachine: (bridge-788750)   <memory unit='MiB'>3072</memory>
	I0317 13:59:16.898328  684423 main.go:141] libmachine: (bridge-788750)   <vcpu>2</vcpu>
	I0317 13:59:16.898361  684423 main.go:141] libmachine: (bridge-788750)   <features>
	I0317 13:59:16.898371  684423 main.go:141] libmachine: (bridge-788750)     <acpi/>
	I0317 13:59:16.898379  684423 main.go:141] libmachine: (bridge-788750)     <apic/>
	I0317 13:59:16.898391  684423 main.go:141] libmachine: (bridge-788750)     <pae/>
	I0317 13:59:16.898401  684423 main.go:141] libmachine: (bridge-788750)     
	I0317 13:59:16.898409  684423 main.go:141] libmachine: (bridge-788750)   </features>
	I0317 13:59:16.898419  684423 main.go:141] libmachine: (bridge-788750)   <cpu mode='host-passthrough'>
	I0317 13:59:16.898428  684423 main.go:141] libmachine: (bridge-788750)   
	I0317 13:59:16.898436  684423 main.go:141] libmachine: (bridge-788750)   </cpu>
	I0317 13:59:16.898444  684423 main.go:141] libmachine: (bridge-788750)   <os>
	I0317 13:59:16.898452  684423 main.go:141] libmachine: (bridge-788750)     <type>hvm</type>
	I0317 13:59:16.898460  684423 main.go:141] libmachine: (bridge-788750)     <boot dev='cdrom'/>
	I0317 13:59:16.898470  684423 main.go:141] libmachine: (bridge-788750)     <boot dev='hd'/>
	I0317 13:59:16.898477  684423 main.go:141] libmachine: (bridge-788750)     <bootmenu enable='no'/>
	I0317 13:59:16.898485  684423 main.go:141] libmachine: (bridge-788750)   </os>
	I0317 13:59:16.898492  684423 main.go:141] libmachine: (bridge-788750)   <devices>
	I0317 13:59:16.898506  684423 main.go:141] libmachine: (bridge-788750)     <disk type='file' device='cdrom'>
	I0317 13:59:16.898519  684423 main.go:141] libmachine: (bridge-788750)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/boot2docker.iso'/>
	I0317 13:59:16.898532  684423 main.go:141] libmachine: (bridge-788750)       <target dev='hdc' bus='scsi'/>
	I0317 13:59:16.898550  684423 main.go:141] libmachine: (bridge-788750)       <readonly/>
	I0317 13:59:16.898559  684423 main.go:141] libmachine: (bridge-788750)     </disk>
	I0317 13:59:16.898568  684423 main.go:141] libmachine: (bridge-788750)     <disk type='file' device='disk'>
	I0317 13:59:16.898584  684423 main.go:141] libmachine: (bridge-788750)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0317 13:59:16.898615  684423 main.go:141] libmachine: (bridge-788750)       <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/bridge-788750.rawdisk'/>
	I0317 13:59:16.898627  684423 main.go:141] libmachine: (bridge-788750)       <target dev='hda' bus='virtio'/>
	I0317 13:59:16.898636  684423 main.go:141] libmachine: (bridge-788750)     </disk>
	I0317 13:59:16.898646  684423 main.go:141] libmachine: (bridge-788750)     <interface type='network'>
	I0317 13:59:16.898676  684423 main.go:141] libmachine: (bridge-788750)       <source network='mk-bridge-788750'/>
	I0317 13:59:16.898700  684423 main.go:141] libmachine: (bridge-788750)       <model type='virtio'/>
	I0317 13:59:16.898709  684423 main.go:141] libmachine: (bridge-788750)     </interface>
	I0317 13:59:16.898721  684423 main.go:141] libmachine: (bridge-788750)     <interface type='network'>
	I0317 13:59:16.898738  684423 main.go:141] libmachine: (bridge-788750)       <source network='default'/>
	I0317 13:59:16.898748  684423 main.go:141] libmachine: (bridge-788750)       <model type='virtio'/>
	I0317 13:59:16.898763  684423 main.go:141] libmachine: (bridge-788750)     </interface>
	I0317 13:59:16.898776  684423 main.go:141] libmachine: (bridge-788750)     <serial type='pty'>
	I0317 13:59:16.898787  684423 main.go:141] libmachine: (bridge-788750)       <target port='0'/>
	I0317 13:59:16.898794  684423 main.go:141] libmachine: (bridge-788750)     </serial>
	I0317 13:59:16.898802  684423 main.go:141] libmachine: (bridge-788750)     <console type='pty'>
	I0317 13:59:16.898813  684423 main.go:141] libmachine: (bridge-788750)       <target type='serial' port='0'/>
	I0317 13:59:16.898819  684423 main.go:141] libmachine: (bridge-788750)     </console>
	I0317 13:59:16.898831  684423 main.go:141] libmachine: (bridge-788750)     <rng model='virtio'>
	I0317 13:59:16.898839  684423 main.go:141] libmachine: (bridge-788750)       <backend model='random'>/dev/random</backend>
	I0317 13:59:16.898851  684423 main.go:141] libmachine: (bridge-788750)     </rng>
	I0317 13:59:16.898874  684423 main.go:141] libmachine: (bridge-788750)     
	I0317 13:59:16.898906  684423 main.go:141] libmachine: (bridge-788750)     
	I0317 13:59:16.898924  684423 main.go:141] libmachine: (bridge-788750)   </devices>
	I0317 13:59:16.898943  684423 main.go:141] libmachine: (bridge-788750) </domain>
	I0317 13:59:16.898963  684423 main.go:141] libmachine: (bridge-788750) 
	I0317 13:59:16.903437  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:d3:5c:cd in network default
	I0317 13:59:16.904002  684423 main.go:141] libmachine: (bridge-788750) starting domain...
	I0317 13:59:16.904026  684423 main.go:141] libmachine: (bridge-788750) ensuring networks are active...
	I0317 13:59:16.904037  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:16.904754  684423 main.go:141] libmachine: (bridge-788750) Ensuring network default is active
	I0317 13:59:16.905086  684423 main.go:141] libmachine: (bridge-788750) Ensuring network mk-bridge-788750 is active
	I0317 13:59:16.905562  684423 main.go:141] libmachine: (bridge-788750) getting domain XML...
	I0317 13:59:16.906187  684423 main.go:141] libmachine: (bridge-788750) creating domain...
	I0317 13:59:18.327351  684423 main.go:141] libmachine: (bridge-788750) waiting for IP...
	I0317 13:59:18.328411  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:18.328897  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:18.328988  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:18.328892  684478 retry.go:31] will retry after 281.911181ms: waiting for domain to come up
	I0317 13:59:18.613012  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:18.613673  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:18.613705  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:18.613640  684478 retry.go:31] will retry after 285.120088ms: waiting for domain to come up
	I0317 13:59:18.900301  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:18.900985  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:18.901010  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:18.900958  684478 retry.go:31] will retry after 300.755427ms: waiting for domain to come up
	I0317 13:59:19.203685  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:19.204433  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:19.204487  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:19.204404  684478 retry.go:31] will retry after 482.495453ms: waiting for domain to come up
	I0317 13:59:19.688081  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:19.688673  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:19.688704  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:19.688626  684478 retry.go:31] will retry after 726.121432ms: waiting for domain to come up
	I0317 13:59:17.651513  682662 main.go:141] libmachine: (flannel-788750) Calling .GetIP
	I0317 13:59:17.654706  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:17.655140  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:17.655175  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:17.655433  682662 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0317 13:59:17.659262  682662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:59:17.670748  682662 kubeadm.go:883] updating cluster {Name:flannel-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:59:17.670853  682662 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:59:17.670896  682662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:17.702512  682662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0317 13:59:17.702583  682662 ssh_runner.go:195] Run: which lz4
	I0317 13:59:17.706362  682662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:59:17.710341  682662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:59:17.710372  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0317 13:59:19.011475  682662 crio.go:462] duration metric: took 1.305154533s to copy over tarball
	I0317 13:59:19.011575  682662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:59:21.330326  682662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.318697484s)
	I0317 13:59:21.330366  682662 crio.go:469] duration metric: took 2.318859908s to extract the tarball
	I0317 13:59:21.330377  682662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:59:21.368396  682662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:21.409403  682662 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:59:21.409435  682662 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:59:21.409446  682662 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.32.2 crio true true} ...
	I0317 13:59:21.409567  682662 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-788750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0317 13:59:21.409728  682662 ssh_runner.go:195] Run: crio config
	I0317 13:59:21.461149  682662 cni.go:84] Creating CNI manager for "flannel"
	I0317 13:59:21.461173  682662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:59:21.461196  682662 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-788750 NodeName:flannel-788750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:59:21.461312  682662 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-788750"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.30"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:59:21.461375  682662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:59:21.471315  682662 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:59:21.471401  682662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:59:21.480637  682662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0317 13:59:21.497818  682662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:59:21.514202  682662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0317 13:59:21.531846  682662 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0317 13:59:21.535852  682662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:59:21.547918  682662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:21.686995  682662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:59:21.707033  682662 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750 for IP: 192.168.72.30
	I0317 13:59:21.707066  682662 certs.go:194] generating shared ca certs ...
	I0317 13:59:21.707100  682662 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:21.707315  682662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:59:21.707394  682662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:59:21.707412  682662 certs.go:256] generating profile certs ...
	I0317 13:59:21.707485  682662 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.key
	I0317 13:59:21.707504  682662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt with IP's: []
	I0317 13:59:21.991318  682662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt ...
	I0317 13:59:21.991349  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: {Name:mk98eed9ca2b5d327d7f4f5299f99a2ef0fd27b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:21.991510  682662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.key ...
	I0317 13:59:21.991521  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.key: {Name:mkb9d21292c13affabb06e343bb09c1a56eddefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:21.991629  682662 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262
	I0317 13:59:21.991650  682662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.30]
	I0317 13:59:22.386930  682662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262 ...
	I0317 13:59:22.386968  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262: {Name:mk5b32d7f691721ce84195f520653f84677487de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:22.387146  682662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262 ...
	I0317 13:59:22.387165  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262: {Name:mk49fe6f886e1c3fa3806fbf01bfe3f58ce4f93f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:22.387271  682662 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt.0ba5c262 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt
	I0317 13:59:22.387368  682662 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key.0ba5c262 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key
	I0317 13:59:22.387444  682662 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key
	I0317 13:59:22.387468  682662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt with IP's: []
	I0317 13:59:23.150969  682662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt ...
	I0317 13:59:23.151001  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt: {Name:mke13a7275b9ea4a183b0de420ac1690d8c1d05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:23.151192  682662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key ...
	I0317 13:59:23.151219  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key: {Name:mk045a87c5e6145ebe19bfd7ec6b3783a3d14258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:23.151427  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:59:23.151466  682662 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:59:23.151476  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:59:23.151497  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:59:23.151524  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:59:23.151569  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:59:23.151609  682662 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:23.152112  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:59:23.175083  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:59:23.197285  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:59:23.225143  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:59:23.247634  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 13:59:23.272656  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 13:59:23.306874  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:59:23.338116  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:59:23.368010  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:59:23.394314  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:59:23.423219  682662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:59:23.446176  682662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:59:23.463202  682662 ssh_runner.go:195] Run: openssl version
	I0317 13:59:23.469202  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:59:23.482769  682662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:23.487375  682662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:23.487441  682662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:23.493591  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:59:23.505891  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:59:23.518142  682662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:59:23.523622  682662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:59:23.523688  682662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:59:23.529371  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:59:23.542335  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:59:23.553110  682662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:59:23.557761  682662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:59:23.557819  682662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:59:23.563002  682662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:59:23.572975  682662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:59:23.576788  682662 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:59:23.576843  682662 kubeadm.go:392] StartCluster: {Name:flannel-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:59:23.576909  682662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:59:23.576950  682662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:59:23.620638  682662 cri.go:89] found id: ""
	I0317 13:59:23.620723  682662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:59:23.630796  682662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:59:23.641676  682662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:59:23.651990  682662 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:59:23.652016  682662 kubeadm.go:157] found existing configuration files:
	
	I0317 13:59:23.652066  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:59:23.662175  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:59:23.662253  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:59:23.673817  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:59:23.683465  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:59:23.683547  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:59:23.695127  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:59:23.708536  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:59:23.708603  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:59:23.720370  682662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:59:23.729693  682662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:59:23.729756  682662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:59:23.741346  682662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:59:23.795773  682662 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 13:59:23.795912  682662 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:59:23.888818  682662 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:59:23.889041  682662 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:59:23.889172  682662 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 13:59:23.902164  682662 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:59:20.416846  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:20.417388  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:20.417472  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:20.417375  684478 retry.go:31] will retry after 578.975886ms: waiting for domain to come up
	I0317 13:59:20.998084  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:20.998743  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:20.998773  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:20.998683  684478 retry.go:31] will retry after 1.168593486s: waiting for domain to come up
	I0317 13:59:22.168602  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:22.169205  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:22.169302  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:22.169195  684478 retry.go:31] will retry after 915.875846ms: waiting for domain to come up
	I0317 13:59:23.086435  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:23.086889  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:23.086917  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:23.086855  684478 retry.go:31] will retry after 1.782289012s: waiting for domain to come up
	I0317 13:59:24.872807  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:24.873338  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:24.873403  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:24.873330  684478 retry.go:31] will retry after 2.082516204s: waiting for domain to come up
	I0317 13:59:24.044110  682662 out.go:235]   - Generating certificates and keys ...
	I0317 13:59:24.044245  682662 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:59:24.044341  682662 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:59:24.044450  682662 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:59:24.201837  682662 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:59:24.546018  682662 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:59:24.644028  682662 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:59:24.791251  682662 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:59:24.791629  682662 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-788750 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0317 13:59:25.148014  682662 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:59:25.148303  682662 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-788750 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0317 13:59:25.299352  682662 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:59:25.535177  682662 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:59:25.769563  682662 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:59:25.769811  682662 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:59:25.913584  682662 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:59:26.217258  682662 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 13:59:26.606599  682662 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:59:26.749144  682662 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:59:26.904044  682662 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:59:26.904808  682662 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:59:26.907777  682662 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:59:26.958055  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:26.958771  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:26.958797  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:26.958687  684478 retry.go:31] will retry after 1.918434497s: waiting for domain to come up
	I0317 13:59:28.884652  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:28.884965  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:28.885027  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:28.884968  684478 retry.go:31] will retry after 2.779630313s: waiting for domain to come up
	I0317 13:59:26.909655  682662 out.go:235]   - Booting up control plane ...
	I0317 13:59:26.909809  682662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:59:26.909938  682662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:59:26.910666  682662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:59:26.932841  682662 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:59:26.939466  682662 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:59:26.939639  682662 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:59:27.099640  682662 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 13:59:27.099824  682662 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 13:59:28.099988  682662 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001156397s
	I0317 13:59:28.100085  682662 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 13:59:33.100467  682662 kubeadm.go:310] [api-check] The API server is healthy after 5.001005071s
	I0317 13:59:33.110998  682662 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 13:59:33.130476  682662 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 13:59:33.152222  682662 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 13:59:33.152448  682662 kubeadm.go:310] [mark-control-plane] Marking the node flannel-788750 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 13:59:33.165204  682662 kubeadm.go:310] [bootstrap-token] Using token: wul87d.x4r8hdwyi1r15k1o
	I0317 13:59:33.166488  682662 out.go:235]   - Configuring RBAC rules ...
	I0317 13:59:33.166623  682662 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 13:59:33.171916  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 13:59:33.180293  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 13:59:33.183680  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 13:59:33.187126  682662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 13:59:33.198883  682662 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 13:59:33.509086  682662 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 13:59:33.946270  682662 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 13:59:34.504578  682662 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 13:59:34.505419  682662 kubeadm.go:310] 
	I0317 13:59:34.505481  682662 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 13:59:34.505490  682662 kubeadm.go:310] 
	I0317 13:59:34.505565  682662 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 13:59:34.505572  682662 kubeadm.go:310] 
	I0317 13:59:34.505592  682662 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 13:59:34.505640  682662 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 13:59:34.505688  682662 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 13:59:34.505694  682662 kubeadm.go:310] 
	I0317 13:59:34.505736  682662 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 13:59:34.505761  682662 kubeadm.go:310] 
	I0317 13:59:34.505821  682662 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 13:59:34.505829  682662 kubeadm.go:310] 
	I0317 13:59:34.505873  682662 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 13:59:34.505941  682662 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 13:59:34.506011  682662 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 13:59:34.506018  682662 kubeadm.go:310] 
	I0317 13:59:34.506093  682662 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 13:59:34.506173  682662 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 13:59:34.506184  682662 kubeadm.go:310] 
	I0317 13:59:34.506253  682662 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wul87d.x4r8hdwyi1r15k1o \
	I0317 13:59:34.506349  682662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 \
	I0317 13:59:34.506371  682662 kubeadm.go:310] 	--control-plane 
	I0317 13:59:34.506377  682662 kubeadm.go:310] 
	I0317 13:59:34.506447  682662 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 13:59:34.506453  682662 kubeadm.go:310] 
	I0317 13:59:34.506520  682662 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wul87d.x4r8hdwyi1r15k1o \
	I0317 13:59:34.506607  682662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 
	I0317 13:59:34.507486  682662 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:59:34.507585  682662 cni.go:84] Creating CNI manager for "flannel"
	I0317 13:59:34.509544  682662 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0317 13:59:31.666320  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:31.666834  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:31.666882  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:31.666805  684478 retry.go:31] will retry after 4.169301354s: waiting for domain to come up
	I0317 13:59:34.510695  682662 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 13:59:34.516421  682662 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 13:59:34.516443  682662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0317 13:59:34.533714  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 13:59:34.893048  682662 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:59:34.893132  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:34.893146  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-788750 minikube.k8s.io/updated_at=2025_03_17T13_59_34_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=flannel-788750 minikube.k8s.io/primary=true
	I0317 13:59:34.943374  682662 ops.go:34] apiserver oom_adj: -16
	I0317 13:59:35.063217  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:35.563994  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:36.063612  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:36.563720  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:37.063942  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:37.564320  682662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:37.648519  682662 kubeadm.go:1113] duration metric: took 2.75546083s to wait for elevateKubeSystemPrivileges
	I0317 13:59:37.648579  682662 kubeadm.go:394] duration metric: took 14.071739112s to StartCluster
	I0317 13:59:37.648605  682662 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:37.648696  682662 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:59:37.649597  682662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:37.649855  682662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 13:59:37.649879  682662 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 13:59:37.649947  682662 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:59:37.650030  682662 addons.go:69] Setting storage-provisioner=true in profile "flannel-788750"
	I0317 13:59:37.650048  682662 addons.go:238] Setting addon storage-provisioner=true in "flannel-788750"
	I0317 13:59:37.650053  682662 addons.go:69] Setting default-storageclass=true in profile "flannel-788750"
	I0317 13:59:37.650079  682662 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-788750"
	I0317 13:59:37.650086  682662 host.go:66] Checking if "flannel-788750" exists ...
	I0317 13:59:37.650103  682662 config.go:182] Loaded profile config "flannel-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:37.650523  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.650547  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.650577  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.650701  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.651428  682662 out.go:177] * Verifying Kubernetes components...
	I0317 13:59:37.652854  682662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:37.666591  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40423
	I0317 13:59:37.666955  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0317 13:59:37.667168  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.667460  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.667722  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.667748  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.667987  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.668015  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.668092  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.668273  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:37.668352  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.668884  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.668932  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.671496  682662 addons.go:238] Setting addon default-storageclass=true in "flannel-788750"
	I0317 13:59:37.671568  682662 host.go:66] Checking if "flannel-788750" exists ...
	I0317 13:59:37.671824  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.671869  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.684502  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0317 13:59:37.685086  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.685604  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.685635  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.685998  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.686195  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:37.687133  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42993
	I0317 13:59:37.687558  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.687970  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.687999  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.688053  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:37.688335  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.688773  682662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:59:37.688810  682662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:59:37.689803  682662 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:59:37.690875  682662 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:59:37.690892  682662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 13:59:37.690913  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:37.694025  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.694514  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:37.694551  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.694795  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:37.694967  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:37.695127  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:37.695254  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:37.705449  682662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0317 13:59:37.705987  682662 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:59:37.706407  682662 main.go:141] libmachine: Using API Version  1
	I0317 13:59:37.706421  682662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:59:37.706665  682662 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:59:37.706825  682662 main.go:141] libmachine: (flannel-788750) Calling .GetState
	I0317 13:59:37.708262  682662 main.go:141] libmachine: (flannel-788750) Calling .DriverName
	I0317 13:59:37.708483  682662 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 13:59:37.708500  682662 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 13:59:37.708528  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHHostname
	I0317 13:59:37.711278  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.711693  682662 main.go:141] libmachine: (flannel-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:e8:19", ip: ""} in network mk-flannel-788750: {Iface:virbr4 ExpiryTime:2025-03-17 14:59:05 +0000 UTC Type:0 Mac:52:54:00:55:e8:19 Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:flannel-788750 Clientid:01:52:54:00:55:e8:19}
	I0317 13:59:37.711724  682662 main.go:141] libmachine: (flannel-788750) DBG | domain flannel-788750 has defined IP address 192.168.72.30 and MAC address 52:54:00:55:e8:19 in network mk-flannel-788750
	I0317 13:59:37.711884  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHPort
	I0317 13:59:37.712049  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHKeyPath
	I0317 13:59:37.712181  682662 main.go:141] libmachine: (flannel-788750) Calling .GetSSHUsername
	I0317 13:59:37.712280  682662 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/flannel-788750/id_rsa Username:docker}
	I0317 13:59:37.792332  682662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 13:59:37.833072  682662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:59:38.013202  682662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 13:59:38.016077  682662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:59:38.285460  682662 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0317 13:59:38.286336  682662 node_ready.go:35] waiting up to 15m0s for node "flannel-788750" to be "Ready" ...
	I0317 13:59:38.286667  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.286688  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.286982  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.287000  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.287008  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.287015  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.287297  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.287316  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.287299  682662 main.go:141] libmachine: (flannel-788750) DBG | Closing plugin on server side
	I0317 13:59:38.336875  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.336910  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.337207  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.337268  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.337215  682662 main.go:141] libmachine: (flannel-788750) DBG | Closing plugin on server side
	I0317 13:59:38.523826  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.523846  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.524131  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.524148  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.524156  682662 main.go:141] libmachine: Making call to close driver server
	I0317 13:59:38.524163  682662 main.go:141] libmachine: (flannel-788750) Calling .Close
	I0317 13:59:38.524185  682662 main.go:141] libmachine: (flannel-788750) DBG | Closing plugin on server side
	I0317 13:59:38.524385  682662 main.go:141] libmachine: Successfully made call to close driver server
	I0317 13:59:38.524400  682662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 13:59:38.525951  682662 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0317 13:59:35.840720  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:35.841321  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find current IP address of domain bridge-788750 in network mk-bridge-788750
	I0317 13:59:35.841354  684423 main.go:141] libmachine: (bridge-788750) DBG | I0317 13:59:35.841303  684478 retry.go:31] will retry after 5.187885311s: waiting for domain to come up
	I0317 13:59:38.527111  682662 addons.go:514] duration metric: took 877.168808ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0317 13:59:38.789426  682662 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-788750" context rescaled to 1 replicas
	I0317 13:59:41.035122  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.035760  684423 main.go:141] libmachine: (bridge-788750) found domain IP: 192.168.39.172
	I0317 13:59:41.035781  684423 main.go:141] libmachine: (bridge-788750) reserving static IP address...
	I0317 13:59:41.035790  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has current primary IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.036284  684423 main.go:141] libmachine: (bridge-788750) DBG | unable to find host DHCP lease matching {name: "bridge-788750", mac: "52:54:00:f1:de:c9", ip: "192.168.39.172"} in network mk-bridge-788750
	I0317 13:59:41.115751  684423 main.go:141] libmachine: (bridge-788750) reserved static IP address 192.168.39.172 for domain bridge-788750
	I0317 13:59:41.115782  684423 main.go:141] libmachine: (bridge-788750) DBG | Getting to WaitForSSH function...
	I0317 13:59:41.115798  684423 main.go:141] libmachine: (bridge-788750) waiting for SSH...
	I0317 13:59:41.118645  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.119016  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.119063  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.119199  684423 main.go:141] libmachine: (bridge-788750) DBG | Using SSH client type: external
	I0317 13:59:41.119225  684423 main.go:141] libmachine: (bridge-788750) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa (-rw-------)
	I0317 13:59:41.119256  684423 main.go:141] libmachine: (bridge-788750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0317 13:59:41.119286  684423 main.go:141] libmachine: (bridge-788750) DBG | About to run SSH command:
	I0317 13:59:41.119302  684423 main.go:141] libmachine: (bridge-788750) DBG | exit 0
	I0317 13:59:41.239165  684423 main.go:141] libmachine: (bridge-788750) DBG | SSH cmd err, output: <nil>: 
	I0317 13:59:41.239469  684423 main.go:141] libmachine: (bridge-788750) KVM machine creation complete
	I0317 13:59:41.239768  684423 main.go:141] libmachine: (bridge-788750) Calling .GetConfigRaw
	I0317 13:59:41.240358  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:41.240533  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:41.240709  684423 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0317 13:59:41.240725  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 13:59:41.242592  684423 main.go:141] libmachine: Detecting operating system of created instance...
	I0317 13:59:41.242609  684423 main.go:141] libmachine: Waiting for SSH to be available...
	I0317 13:59:41.242616  684423 main.go:141] libmachine: Getting to WaitForSSH function...
	I0317 13:59:41.242621  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.245217  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.245580  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.245613  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.245737  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.245916  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.246031  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.246188  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.246355  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.246654  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.246667  684423 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0317 13:59:41.346704  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:59:41.346735  684423 main.go:141] libmachine: Detecting the provisioner...
	I0317 13:59:41.346747  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.349542  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.349892  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.349914  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.350053  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.350256  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.350419  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.350553  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.350715  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.350978  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.350993  684423 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0317 13:59:41.451908  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0317 13:59:41.452000  684423 main.go:141] libmachine: found compatible host: buildroot
	I0317 13:59:41.452014  684423 main.go:141] libmachine: Provisioning with buildroot...
	I0317 13:59:41.452029  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:41.452354  684423 buildroot.go:166] provisioning hostname "bridge-788750"
	I0317 13:59:41.452380  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:41.452554  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.455040  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.455399  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.455424  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.455605  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.455777  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.455930  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.456042  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.456163  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.456436  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.456451  684423 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-788750 && echo "bridge-788750" | sudo tee /etc/hostname
	I0317 13:59:41.567818  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-788750
	
	I0317 13:59:41.567853  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.570485  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.570807  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.570833  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.570996  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.571193  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.571364  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.571484  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.571645  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.571862  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.571897  684423 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-788750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-788750/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-788750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:59:41.678869  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:59:41.678904  684423 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
	I0317 13:59:41.678929  684423 buildroot.go:174] setting up certificates
	I0317 13:59:41.678941  684423 provision.go:84] configureAuth start
	I0317 13:59:41.678954  684423 main.go:141] libmachine: (bridge-788750) Calling .GetMachineName
	I0317 13:59:41.679256  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:41.681754  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.682060  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.682086  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.682262  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.684392  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.684679  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.684700  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.684844  684423 provision.go:143] copyHostCerts
	I0317 13:59:41.684907  684423 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem, removing ...
	I0317 13:59:41.684932  684423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem
	I0317 13:59:41.685003  684423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
	I0317 13:59:41.685129  684423 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem, removing ...
	I0317 13:59:41.685142  684423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem
	I0317 13:59:41.685177  684423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
	I0317 13:59:41.685262  684423 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem, removing ...
	I0317 13:59:41.685272  684423 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem
	I0317 13:59:41.685301  684423 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
	I0317 13:59:41.685372  684423 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.bridge-788750 san=[127.0.0.1 192.168.39.172 bridge-788750 localhost minikube]
	I0317 13:59:41.821887  684423 provision.go:177] copyRemoteCerts
	I0317 13:59:41.821963  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:59:41.821998  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.824975  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.825287  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.825315  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.825479  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.825693  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.825854  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.826011  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:41.905677  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 13:59:41.929529  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:59:41.950905  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 13:59:41.975981  684423 provision.go:87] duration metric: took 297.025637ms to configureAuth
	I0317 13:59:41.976008  684423 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:59:41.976155  684423 config.go:182] Loaded profile config "bridge-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:59:41.976223  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:41.978872  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.979159  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:41.979182  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:41.979352  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:41.979562  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.979759  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:41.979913  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:41.980059  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:41.980356  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:41.980382  684423 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0317 13:59:42.210802  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0317 13:59:42.210834  684423 main.go:141] libmachine: Checking connection to Docker...
	I0317 13:59:42.210843  684423 main.go:141] libmachine: (bridge-788750) Calling .GetURL
	I0317 13:59:42.212236  684423 main.go:141] libmachine: (bridge-788750) DBG | using libvirt version 6000000
	I0317 13:59:42.214601  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.214997  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.215057  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.215186  684423 main.go:141] libmachine: Docker is up and running!
	I0317 13:59:42.215202  684423 main.go:141] libmachine: Reticulating splines...
	I0317 13:59:42.215215  684423 client.go:171] duration metric: took 25.952374084s to LocalClient.Create
	I0317 13:59:42.215251  684423 start.go:167] duration metric: took 25.952448094s to libmachine.API.Create "bridge-788750"
	I0317 13:59:42.215261  684423 start.go:293] postStartSetup for "bridge-788750" (driver="kvm2")
	I0317 13:59:42.215270  684423 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:59:42.215295  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.215556  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:59:42.215589  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.217971  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.218424  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.218456  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.218633  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.218799  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.218975  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.219128  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:42.300906  684423 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:59:42.304516  684423 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:59:42.304543  684423 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
	I0317 13:59:42.304605  684423 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
	I0317 13:59:42.304685  684423 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem -> 6291882.pem in /etc/ssl/certs
	I0317 13:59:42.304772  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:59:42.313026  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:42.333975  684423 start.go:296] duration metric: took 118.700242ms for postStartSetup
	I0317 13:59:42.334033  684423 main.go:141] libmachine: (bridge-788750) Calling .GetConfigRaw
	I0317 13:59:42.334606  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:42.337068  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.337371  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.337392  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.337630  684423 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/config.json ...
	I0317 13:59:42.337824  684423 start.go:128] duration metric: took 26.101149226s to createHost
	I0317 13:59:42.337851  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.339859  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.340209  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.340235  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.340363  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.340551  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.340698  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.340815  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.340963  684423 main.go:141] libmachine: Using SSH client type: native
	I0317 13:59:42.341165  684423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0317 13:59:42.341174  684423 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:59:42.439850  684423 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742219982.415531589
	
	I0317 13:59:42.439872  684423 fix.go:216] guest clock: 1742219982.415531589
	I0317 13:59:42.439880  684423 fix.go:229] Guest: 2025-03-17 13:59:42.415531589 +0000 UTC Remote: 2025-03-17 13:59:42.337836583 +0000 UTC m=+27.394798300 (delta=77.695006ms)
	I0317 13:59:42.439905  684423 fix.go:200] guest clock delta is within tolerance: 77.695006ms
	I0317 13:59:42.439912  684423 start.go:83] releasing machines lock for "bridge-788750", held for 26.203397217s
	I0317 13:59:42.439939  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.440201  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:42.442831  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.443753  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.443782  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.443987  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.444519  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.444688  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 13:59:42.444782  684423 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 13:59:42.444829  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.444939  684423 ssh_runner.go:195] Run: cat /version.json
	I0317 13:59:42.444960  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 13:59:42.447411  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.447758  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.447784  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.447802  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.447875  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.448064  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.448237  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.448251  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:42.448269  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:42.448387  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:42.448444  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 13:59:42.448571  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 13:59:42.448710  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 13:59:42.448828  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 13:59:42.541855  684423 ssh_runner.go:195] Run: systemctl --version
	I0317 13:59:42.548086  684423 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0317 13:59:42.702999  684423 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:59:42.708812  684423 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:59:42.708887  684423 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:59:42.723697  684423 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:59:42.723726  684423 start.go:495] detecting cgroup driver to use...
	I0317 13:59:42.723794  684423 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:59:42.739584  684423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:59:42.752485  684423 docker.go:217] disabling cri-docker service (if available) ...
	I0317 13:59:42.752559  684423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 13:59:42.765024  684423 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 13:59:42.777346  684423 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 13:59:42.885029  684423 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 13:59:43.046402  684423 docker.go:233] disabling docker service ...
	I0317 13:59:43.046499  684423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 13:59:43.060044  684423 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 13:59:43.072350  684423 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 13:59:43.187346  684423 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 13:59:43.322509  684423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 13:59:43.337797  684423 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:59:43.358051  684423 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0317 13:59:43.358120  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.369454  684423 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0317 13:59:43.369564  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.381103  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.392551  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.404000  684423 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:59:43.415664  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.426423  684423 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.448074  684423 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0317 13:59:43.458706  684423 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:59:43.470365  684423 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:59:43.470437  684423 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:59:43.483041  684423 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:59:43.493515  684423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:43.630579  684423 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0317 13:59:43.729029  684423 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0317 13:59:43.729100  684423 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0317 13:59:43.733964  684423 start.go:563] Will wait 60s for crictl version
	I0317 13:59:43.734029  684423 ssh_runner.go:195] Run: which crictl
	I0317 13:59:43.737635  684423 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:59:43.773498  684423 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0317 13:59:43.773598  684423 ssh_runner.go:195] Run: crio --version
	I0317 13:59:43.799596  684423 ssh_runner.go:195] Run: crio --version
	I0317 13:59:43.828121  684423 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0317 13:59:43.829699  684423 main.go:141] libmachine: (bridge-788750) Calling .GetIP
	I0317 13:59:43.832890  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:43.833374  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 13:59:43.833402  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 13:59:43.833642  684423 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0317 13:59:43.838613  684423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:59:43.856973  684423 kubeadm.go:883] updating cluster {Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:59:43.857104  684423 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0317 13:59:43.857172  684423 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:43.890166  684423 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0317 13:59:43.890276  684423 ssh_runner.go:195] Run: which lz4
	I0317 13:59:43.894425  684423 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:59:43.898332  684423 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:59:43.898364  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0317 13:59:40.289428  682662 node_ready.go:53] node "flannel-788750" has status "Ready":"False"
	I0317 13:59:42.289498  682662 node_ready.go:53] node "flannel-788750" has status "Ready":"False"
	I0317 13:59:44.290188  682662 node_ready.go:53] node "flannel-788750" has status "Ready":"False"
	I0317 13:59:45.225798  684423 crio.go:462] duration metric: took 1.33140551s to copy over tarball
	I0317 13:59:45.225877  684423 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:59:47.413334  684423 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.187422971s)
	I0317 13:59:47.413377  684423 crio.go:469] duration metric: took 2.187543023s to extract the tarball
	I0317 13:59:47.413388  684423 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:59:47.449993  684423 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 13:59:47.487606  684423 crio.go:514] all images are preloaded for cri-o runtime.
	I0317 13:59:47.487630  684423 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:59:47.487638  684423 kubeadm.go:934] updating node { 192.168.39.172 8443 v1.32.2 crio true true} ...
	I0317 13:59:47.487749  684423 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-788750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0317 13:59:47.487816  684423 ssh_runner.go:195] Run: crio config
	I0317 13:59:47.534961  684423 cni.go:84] Creating CNI manager for "bridge"
	I0317 13:59:47.535001  684423 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:59:47.535023  684423 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-788750 NodeName:bridge-788750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:59:47.535182  684423 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-788750"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.172"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:59:47.535265  684423 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:59:47.545198  684423 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:59:47.545288  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:59:47.554688  684423 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0317 13:59:47.570775  684423 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:59:47.586074  684423 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0317 13:59:47.601202  684423 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I0317 13:59:47.604740  684423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
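	Note: the /bin/bash one-liner above is minikube's idempotent /etc/hosts update: drop any stale line for the control-plane name, append a fresh IP<TAB>name pair, and copy the temp file back over /etc/hosts with sudo. Generalized sketch of the same pattern (NAME and IP taken from the log line; any other host would work the same way):
	  NAME=control-plane.minikube.internal; IP=192.168.39.172
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts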
	I0317 13:59:47.616014  684423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:59:47.728366  684423 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:59:47.743468  684423 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750 for IP: 192.168.39.172
	I0317 13:59:47.743513  684423 certs.go:194] generating shared ca certs ...
	I0317 13:59:47.743563  684423 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:47.743737  684423 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
	I0317 13:59:47.743797  684423 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
	I0317 13:59:47.743818  684423 certs.go:256] generating profile certs ...
	I0317 13:59:47.743881  684423 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.key
	I0317 13:59:47.743903  684423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt with IP's: []
	I0317 13:59:47.925990  684423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt ...
	I0317 13:59:47.926022  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.crt: {Name:mk57b03e60343324f33ad0a804eeb5fac91ff61e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:47.926184  684423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.key ...
	I0317 13:59:47.926194  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/client.key: {Name:mka3fd5553386d9680255eba9e4b30307d081270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:47.926268  684423 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf
	I0317 13:59:47.926283  684423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.172]
	I0317 13:59:48.596199  684423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf ...
	I0317 13:59:48.596251  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf: {Name:mkbe02ed764b875a14246503fcc050fdb71db7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.596488  684423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf ...
	I0317 13:59:48.596518  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf: {Name:mk3fa88c7fab72a1bf633ff2d7f92bde1aceb5c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.596660  684423 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt.cf2a90cf -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt
	I0317 13:59:48.596782  684423 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key.cf2a90cf -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key
	I0317 13:59:48.596878  684423 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key
	I0317 13:59:48.596903  684423 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt with IP's: []
	I0317 13:59:48.787513  684423 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt ...
	I0317 13:59:48.787555  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt: {Name:mkd3b1b33b0e3868ee38a25e6cd6690a1040bc04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.787732  684423 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key ...
	I0317 13:59:48.787744  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key: {Name:mkd529fbd19dbc16b398c1bddab0b44e7d4e1345 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:59:48.787912  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem (1338 bytes)
	W0317 13:59:48.787955  684423 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188_empty.pem, impossibly tiny 0 bytes
	I0317 13:59:48.787965  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 13:59:48.787986  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
	I0317 13:59:48.788012  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
	I0317 13:59:48.788046  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
	I0317 13:59:48.788086  684423 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem (1708 bytes)
	I0317 13:59:48.788618  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:59:48.815047  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:59:48.837091  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:59:48.858337  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 13:59:48.882013  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 13:59:48.903168  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 13:59:48.925979  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:59:48.946611  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/bridge-788750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:59:48.970676  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/629188.pem --> /usr/share/ca-certificates/629188.pem (1338 bytes)
	I0317 13:59:48.997588  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/ssl/certs/6291882.pem --> /usr/share/ca-certificates/6291882.pem (1708 bytes)
	I0317 13:59:49.019322  684423 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:59:49.041024  684423 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:59:49.056421  684423 ssh_runner.go:195] Run: openssl version
	I0317 13:59:49.061875  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/629188.pem && ln -fs /usr/share/ca-certificates/629188.pem /etc/ssl/certs/629188.pem"
	I0317 13:59:49.072082  684423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/629188.pem
	I0317 13:59:49.076316  684423 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:49 /usr/share/ca-certificates/629188.pem
	I0317 13:59:49.076377  684423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/629188.pem
	I0317 13:59:49.081812  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/629188.pem /etc/ssl/certs/51391683.0"
	I0317 13:59:49.092035  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6291882.pem && ln -fs /usr/share/ca-certificates/6291882.pem /etc/ssl/certs/6291882.pem"
	I0317 13:59:49.102272  684423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6291882.pem
	I0317 13:59:49.106676  684423 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:49 /usr/share/ca-certificates/6291882.pem
	I0317 13:59:49.106727  684423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6291882.pem
	I0317 13:59:49.112133  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6291882.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:59:49.121990  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:59:49.131611  684423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:49.135725  684423 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:49.135803  684423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:59:49.141146  684423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
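	Note: the openssl/ln -fs sequence above builds the standard OpenSSL CA-path layout, in which every trusted certificate is also reachable through a symlink named <subject-hash>.0 (51391683.0, 3ec20f2e.0 and b5213941.0 here). A minimal sketch for a single certificate, reusing the paths from the log:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"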
	I0317 13:59:49.151486  684423 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:59:49.155121  684423 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:59:49.155172  684423 kubeadm.go:392] StartCluster: {Name:bridge-788750 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-788750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:59:49.155238  684423 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0317 13:59:49.155277  684423 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 13:59:49.187716  684423 cri.go:89] found id: ""
	I0317 13:59:49.187787  684423 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:59:49.197392  684423 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:59:49.206456  684423 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:59:49.215648  684423 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:59:49.215666  684423 kubeadm.go:157] found existing configuration files:
	
	I0317 13:59:49.215701  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:59:49.224457  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:59:49.224510  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:59:49.233665  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:59:49.245183  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:59:49.245257  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:59:49.257317  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:59:49.269822  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:59:49.269892  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:59:49.281019  684423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:59:49.291173  684423 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:59:49.291250  684423 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:59:49.303204  684423 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:59:49.350647  684423 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 13:59:49.350717  684423 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:59:49.447801  684423 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:59:49.447928  684423 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:59:49.448087  684423 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 13:59:49.457405  684423 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:59:49.542153  684423 out.go:235]   - Generating certificates and keys ...
	I0317 13:59:49.542293  684423 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:59:49.542375  684423 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:59:49.722255  684423 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:59:49.810201  684423 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:59:45.790781  682662 node_ready.go:49] node "flannel-788750" has status "Ready":"True"
	I0317 13:59:45.790806  682662 node_ready.go:38] duration metric: took 7.504444131s for node "flannel-788750" to be "Ready" ...
	I0317 13:59:45.790816  682662 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:59:45.797709  682662 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace to be "Ready" ...
	I0317 13:59:47.804134  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:50.058796  684423 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:59:50.325974  684423 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:59:50.766611  684423 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:59:50.766821  684423 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-788750 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0317 13:59:50.962806  684423 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:59:50.962985  684423 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-788750 localhost] and IPs [192.168.39.172 127.0.0.1 ::1]
	I0317 13:59:51.069262  684423 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:59:51.154142  684423 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:59:51.485810  684423 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:59:51.486035  684423 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:59:51.589554  684423 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:59:51.703382  684423 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 13:59:51.818706  684423 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:59:51.939373  684423 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:59:52.087035  684423 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:59:52.087704  684423 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:59:52.090229  684423 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:59:52.093249  684423 out.go:235]   - Booting up control plane ...
	I0317 13:59:52.093382  684423 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:59:52.093493  684423 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:59:52.093923  684423 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:59:52.111087  684423 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:59:52.117277  684423 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:59:52.117337  684423 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:59:52.258455  684423 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 13:59:52.258600  684423 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 13:59:53.259182  684423 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00165578s
	I0317 13:59:53.259294  684423 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 13:59:50.717873  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:52.802425  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:54.804753  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:57.758540  684423 kubeadm.go:310] [api-check] The API server is healthy after 4.501842676s
	I0317 13:59:57.770918  684423 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 13:59:57.784450  684423 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 13:59:57.821683  684423 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 13:59:57.821916  684423 kubeadm.go:310] [mark-control-plane] Marking the node bridge-788750 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 13:59:57.835685  684423 kubeadm.go:310] [bootstrap-token] Using token: 6r2rfy.f4amir38rs4aheab
	I0317 13:59:57.836800  684423 out.go:235]   - Configuring RBAC rules ...
	I0317 13:59:57.836921  684423 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 13:59:57.842871  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 13:59:57.849820  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 13:59:57.853155  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 13:59:57.856545  684423 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 13:59:57.862086  684423 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 13:59:58.165290  684423 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 13:59:58.587281  684423 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 13:59:59.166763  684423 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 13:59:59.167778  684423 kubeadm.go:310] 
	I0317 13:59:59.167887  684423 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 13:59:59.167901  684423 kubeadm.go:310] 
	I0317 13:59:59.167991  684423 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 13:59:59.168016  684423 kubeadm.go:310] 
	I0317 13:59:59.168054  684423 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 13:59:59.168111  684423 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 13:59:59.168153  684423 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 13:59:59.168159  684423 kubeadm.go:310] 
	I0317 13:59:59.168201  684423 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 13:59:59.168207  684423 kubeadm.go:310] 
	I0317 13:59:59.168245  684423 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 13:59:59.168250  684423 kubeadm.go:310] 
	I0317 13:59:59.168299  684423 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 13:59:59.168425  684423 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 13:59:59.168502  684423 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 13:59:59.168521  684423 kubeadm.go:310] 
	I0317 13:59:59.168648  684423 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 13:59:59.168770  684423 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 13:59:59.168781  684423 kubeadm.go:310] 
	I0317 13:59:59.168894  684423 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6r2rfy.f4amir38rs4aheab \
	I0317 13:59:59.169039  684423 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 \
	I0317 13:59:59.169080  684423 kubeadm.go:310] 	--control-plane 
	I0317 13:59:59.169096  684423 kubeadm.go:310] 
	I0317 13:59:59.169180  684423 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 13:59:59.169193  684423 kubeadm.go:310] 
	I0317 13:59:59.169265  684423 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6r2rfy.f4amir38rs4aheab \
	I0317 13:59:59.169358  684423 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 
	I0317 13:59:59.170059  684423 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
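	Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. If it ever needs to be recomputed on this control plane, the usual recipe (using the certificatesDir from the config earlier, and assuming an RSA CA key) is:
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'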
	I0317 13:59:59.170143  684423 cni.go:84] Creating CNI manager for "bridge"
	I0317 13:59:59.171940  684423 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:59:59.173180  684423 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:59:59.183169  684423 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0317 13:59:59.199645  684423 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:59:59.199744  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:59.199776  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-788750 minikube.k8s.io/updated_at=2025_03_17T13_59_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=bridge-788750 minikube.k8s.io/primary=true
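	Note: the kubectl label command above stamps the node with minikube's bookkeeping labels (version, commit, updated_at, primary flag). A quick way to read them back, assuming kubectl is pointed at this cluster:
	  kubectl get node bridge-788750 --show-labels
	  kubectl describe node bridge-788750 | grep -i 'minikube\.k8s\.io/'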
	I0317 13:59:59.239955  684423 ops.go:34] apiserver oom_adj: -16
	I0317 13:59:59.366223  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:59.867211  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:59:57.304825  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 13:59:59.803981  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:00.367289  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:00.866372  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:01.366507  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:01.866445  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:02.366509  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:02.866437  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:03.367015  684423 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 14:00:03.450761  684423 kubeadm.go:1113] duration metric: took 4.251073397s to wait for elevateKubeSystemPrivileges
	I0317 14:00:03.450805  684423 kubeadm.go:394] duration metric: took 14.295636291s to StartCluster
	I0317 14:00:03.450831  684423 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 14:00:03.450907  684423 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 14:00:03.451925  684423 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 14:00:03.452144  684423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 14:00:03.452156  684423 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 14:00:03.452210  684423 addons.go:69] Setting storage-provisioner=true in profile "bridge-788750"
	I0317 14:00:03.452229  684423 addons.go:238] Setting addon storage-provisioner=true in "bridge-788750"
	I0317 14:00:03.452140  684423 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0317 14:00:03.452274  684423 host.go:66] Checking if "bridge-788750" exists ...
	I0317 14:00:03.452383  684423 config.go:182] Loaded profile config "bridge-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 14:00:03.452233  684423 addons.go:69] Setting default-storageclass=true in profile "bridge-788750"
	I0317 14:00:03.452450  684423 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-788750"
	I0317 14:00:03.452759  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.452797  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.452814  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.452848  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.454590  684423 out.go:177] * Verifying Kubernetes components...
	I0317 14:00:03.456094  684423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 14:00:03.468607  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I0317 14:00:03.468791  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44917
	I0317 14:00:03.469225  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.469232  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.469737  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.469751  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.469902  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.469927  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.470138  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.470336  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.470543  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 14:00:03.470733  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.470780  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.474090  684423 addons.go:238] Setting addon default-storageclass=true in "bridge-788750"
	I0317 14:00:03.474136  684423 host.go:66] Checking if "bridge-788750" exists ...
	I0317 14:00:03.474497  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.474557  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.490576  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46347
	I0317 14:00:03.491139  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.491781  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.491813  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.492238  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.492487  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 14:00:03.493292  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37659
	I0317 14:00:03.493769  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.494289  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.494321  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.494560  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 14:00:03.494679  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.495346  684423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 14:00:03.495400  684423 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 14:00:03.496399  684423 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 14:00:02.303406  682662 pod_ready.go:103] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:03.303401  682662 pod_ready.go:93] pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.303427  682662 pod_ready.go:82] duration metric: took 17.505677844s for pod "coredns-668d6bf9bc-vxj99" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.303436  682662 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.306929  682662 pod_ready.go:93] pod "etcd-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.306947  682662 pod_ready.go:82] duration metric: took 3.50631ms for pod "etcd-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.306955  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.310273  682662 pod_ready.go:93] pod "kube-apiserver-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.310298  682662 pod_ready.go:82] duration metric: took 3.335994ms for pod "kube-apiserver-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.310311  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.314183  682662 pod_ready.go:93] pod "kube-controller-manager-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.314198  682662 pod_ready.go:82] duration metric: took 3.880278ms for pod "kube-controller-manager-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.314205  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-drfjv" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.318021  682662 pod_ready.go:93] pod "kube-proxy-drfjv" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.318036  682662 pod_ready.go:82] duration metric: took 3.826269ms for pod "kube-proxy-drfjv" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.318043  682662 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.702302  682662 pod_ready.go:93] pod "kube-scheduler-flannel-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:03.702331  682662 pod_ready.go:82] duration metric: took 384.281244ms for pod "kube-scheduler-flannel-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:03.702346  682662 pod_ready.go:39] duration metric: took 17.911515691s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 14:00:03.702367  682662 api_server.go:52] waiting for apiserver process to appear ...
	I0317 14:00:03.702433  682662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 14:00:03.717069  682662 api_server.go:72] duration metric: took 26.067154095s to wait for apiserver process to appear ...
	I0317 14:00:03.717103  682662 api_server.go:88] waiting for apiserver healthz status ...
	I0317 14:00:03.717125  682662 api_server.go:253] Checking apiserver healthz at https://192.168.72.30:8443/healthz ...
	I0317 14:00:03.722046  682662 api_server.go:279] https://192.168.72.30:8443/healthz returned 200:
	ok
	I0317 14:00:03.723179  682662 api_server.go:141] control plane version: v1.32.2
	I0317 14:00:03.723202  682662 api_server.go:131] duration metric: took 6.092065ms to wait for apiserver health ...
	I0317 14:00:03.723210  682662 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 14:00:03.901881  682662 system_pods.go:59] 7 kube-system pods found
	I0317 14:00:03.901916  682662 system_pods.go:61] "coredns-668d6bf9bc-vxj99" [01dabec4-3c24-4158-be92-c3977fb97dfa] Running
	I0317 14:00:03.901922  682662 system_pods.go:61] "etcd-flannel-788750" [59ecce98-e332-4794-969b-ab4e7f6ed07d] Running
	I0317 14:00:03.901926  682662 system_pods.go:61] "kube-apiserver-flannel-788750" [3e3d9c24-5edc-41eb-9283-29aa99fc1350] Running
	I0317 14:00:03.901930  682662 system_pods.go:61] "kube-controller-manager-flannel-788750" [f35d013a-b551-449c-826b-c131d053ca3b] Running
	I0317 14:00:03.901934  682662 system_pods.go:61] "kube-proxy-drfjv" [4f07f0b7-e946-4538-b142-897bdc2bb75d] Running
	I0317 14:00:03.901937  682662 system_pods.go:61] "kube-scheduler-flannel-788750" [ab721524-aac4-418b-990e-1ff6b8018936] Running
	I0317 14:00:03.901940  682662 system_pods.go:61] "storage-provisioner" [4e157f14-2a65-4439-ac31-a04e5cda8332] Running
	I0317 14:00:03.901947  682662 system_pods.go:74] duration metric: took 178.731729ms to wait for pod list to return data ...
	I0317 14:00:03.901954  682662 default_sa.go:34] waiting for default service account to be created ...
	I0317 14:00:04.103094  682662 default_sa.go:45] found service account: "default"
	I0317 14:00:04.103124  682662 default_sa.go:55] duration metric: took 201.164871ms for default service account to be created ...
	I0317 14:00:04.103135  682662 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 14:00:04.301791  682662 system_pods.go:86] 7 kube-system pods found
	I0317 14:00:04.301829  682662 system_pods.go:89] "coredns-668d6bf9bc-vxj99" [01dabec4-3c24-4158-be92-c3977fb97dfa] Running
	I0317 14:00:04.301836  682662 system_pods.go:89] "etcd-flannel-788750" [59ecce98-e332-4794-969b-ab4e7f6ed07d] Running
	I0317 14:00:04.301840  682662 system_pods.go:89] "kube-apiserver-flannel-788750" [3e3d9c24-5edc-41eb-9283-29aa99fc1350] Running
	I0317 14:00:04.301843  682662 system_pods.go:89] "kube-controller-manager-flannel-788750" [f35d013a-b551-449c-826b-c131d053ca3b] Running
	I0317 14:00:04.301847  682662 system_pods.go:89] "kube-proxy-drfjv" [4f07f0b7-e946-4538-b142-897bdc2bb75d] Running
	I0317 14:00:04.301850  682662 system_pods.go:89] "kube-scheduler-flannel-788750" [ab721524-aac4-418b-990e-1ff6b8018936] Running
	I0317 14:00:04.301854  682662 system_pods.go:89] "storage-provisioner" [4e157f14-2a65-4439-ac31-a04e5cda8332] Running
	I0317 14:00:04.301864  682662 system_pods.go:126] duration metric: took 198.721059ms to wait for k8s-apps to be running ...
	I0317 14:00:04.301875  682662 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 14:00:04.301935  682662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 14:00:04.316032  682662 system_svc.go:56] duration metric: took 14.14678ms WaitForService to wait for kubelet
	I0317 14:00:04.316064  682662 kubeadm.go:582] duration metric: took 26.666157602s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 14:00:04.316080  682662 node_conditions.go:102] verifying NodePressure condition ...
	I0317 14:00:04.501869  682662 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 14:00:04.501907  682662 node_conditions.go:123] node cpu capacity is 2
	I0317 14:00:04.501923  682662 node_conditions.go:105] duration metric: took 185.838303ms to run NodePressure ...
	I0317 14:00:04.501938  682662 start.go:241] waiting for startup goroutines ...
	I0317 14:00:04.501947  682662 start.go:246] waiting for cluster config update ...
	I0317 14:00:04.501961  682662 start.go:255] writing updated cluster config ...
	I0317 14:00:04.502390  682662 ssh_runner.go:195] Run: rm -f paused
	I0317 14:00:04.560415  682662 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 14:00:04.562095  682662 out.go:177] * Done! kubectl is now configured to use "flannel-788750" cluster and "default" namespace by default
	I0317 14:00:03.498107  684423 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 14:00:03.498129  684423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 14:00:03.498150  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 14:00:03.501995  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.502514  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 14:00:03.502535  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.502849  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 14:00:03.503042  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 14:00:03.503202  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 14:00:03.503328  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 14:00:03.513291  684423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I0317 14:00:03.513983  684423 main.go:141] libmachine: () Calling .GetVersion
	I0317 14:00:03.514587  684423 main.go:141] libmachine: Using API Version  1
	I0317 14:00:03.514619  684423 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 14:00:03.515049  684423 main.go:141] libmachine: () Calling .GetMachineName
	I0317 14:00:03.515268  684423 main.go:141] libmachine: (bridge-788750) Calling .GetState
	I0317 14:00:03.516963  684423 main.go:141] libmachine: (bridge-788750) Calling .DriverName
	I0317 14:00:03.517201  684423 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 14:00:03.517224  684423 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 14:00:03.517247  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHHostname
	I0317 14:00:03.520046  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.520541  684423 main.go:141] libmachine: (bridge-788750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:de:c9", ip: ""} in network mk-bridge-788750: {Iface:virbr1 ExpiryTime:2025-03-17 14:59:32 +0000 UTC Type:0 Mac:52:54:00:f1:de:c9 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:bridge-788750 Clientid:01:52:54:00:f1:de:c9}
	I0317 14:00:03.520586  684423 main.go:141] libmachine: (bridge-788750) DBG | domain bridge-788750 has defined IP address 192.168.39.172 and MAC address 52:54:00:f1:de:c9 in network mk-bridge-788750
	I0317 14:00:03.520647  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHPort
	I0317 14:00:03.520817  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHKeyPath
	I0317 14:00:03.520958  684423 main.go:141] libmachine: (bridge-788750) Calling .GetSSHUsername
	I0317 14:00:03.521075  684423 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/bridge-788750/id_rsa Username:docker}
	I0317 14:00:03.660151  684423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
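	Note: the sed pipeline above rewrites the coredns ConfigMap in place so the Corefile gains a log directive and a hosts block ahead of the forward plugin; the injected block resolves host.minikube.internal to 192.168.39.1 and falls through to the normal resolvers for everything else. A sketch of checking the result with the same kubectl binary and kubeconfig used in the log:
	  # expected: a hosts block mapping 192.168.39.1 to host.minikube.internal, ending in "fallthrough"
	  sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'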
	I0317 14:00:03.679967  684423 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 14:00:03.844532  684423 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 14:00:03.876251  684423 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 14:00:04.011057  684423 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0317 14:00:04.012032  684423 node_ready.go:35] waiting up to 15m0s for node "bridge-788750" to be "Ready" ...
	I0317 14:00:04.024328  684423 node_ready.go:49] node "bridge-788750" has status "Ready":"True"
	I0317 14:00:04.024353  684423 node_ready.go:38] duration metric: took 12.290285ms for node "bridge-788750" to be "Ready" ...
	I0317 14:00:04.024365  684423 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 14:00:04.028595  684423 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:04.253238  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.253271  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.253666  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.253690  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.253696  684423 main.go:141] libmachine: (bridge-788750) DBG | Closing plugin on server side
	I0317 14:00:04.253704  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.253714  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.253988  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.254006  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.262922  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.262941  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.263255  684423 main.go:141] libmachine: (bridge-788750) DBG | Closing plugin on server side
	I0317 14:00:04.263297  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.263312  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.515692  684423 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-788750" context rescaled to 1 replicas
	I0317 14:00:04.766030  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.766064  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.766393  684423 main.go:141] libmachine: (bridge-788750) DBG | Closing plugin on server side
	I0317 14:00:04.766450  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.766466  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.766480  684423 main.go:141] libmachine: Making call to close driver server
	I0317 14:00:04.766489  684423 main.go:141] libmachine: (bridge-788750) Calling .Close
	I0317 14:00:04.766715  684423 main.go:141] libmachine: Successfully made call to close driver server
	I0317 14:00:04.766733  684423 main.go:141] libmachine: Making call to close connection to plugin binary
	I0317 14:00:04.768395  684423 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0317 14:00:04.769582  684423 addons.go:514] duration metric: took 1.317420787s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0317 14:00:08.100227  673643 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0317 14:00:08.100326  673643 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0317 14:00:08.101702  673643 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0317 14:00:08.101771  673643 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 14:00:08.101843  673643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 14:00:08.101949  673643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 14:00:08.102103  673643 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0317 14:00:08.102213  673643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 14:00:08.103900  673643 out.go:235]   - Generating certificates and keys ...
	I0317 14:00:08.103990  673643 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 14:00:08.104047  673643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 14:00:08.104124  673643 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0317 14:00:08.104200  673643 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0317 14:00:08.104303  673643 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0317 14:00:08.104384  673643 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0317 14:00:08.104471  673643 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0317 14:00:08.104558  673643 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0317 14:00:08.104655  673643 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0317 14:00:08.104750  673643 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0317 14:00:08.104799  673643 kubeadm.go:310] [certs] Using the existing "sa" key
	I0317 14:00:08.104865  673643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 14:00:08.104953  673643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 14:00:08.105028  673643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 14:00:08.105106  673643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 14:00:08.105200  673643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 14:00:08.105374  673643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 14:00:08.105449  673643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 14:00:08.105497  673643 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 14:00:08.105613  673643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 14:00:08.107093  673643 out.go:235]   - Booting up control plane ...
	I0317 14:00:08.107203  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 14:00:08.107321  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 14:00:08.107412  673643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 14:00:08.107544  673643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 14:00:08.107730  673643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0317 14:00:08.107811  673643 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0317 14:00:08.107903  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108136  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108241  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108504  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108614  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.108874  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.108968  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.109174  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.109230  673643 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0317 14:00:08.109440  673643 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0317 14:00:08.109465  673643 kubeadm.go:310] 
	I0317 14:00:08.109515  673643 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0317 14:00:08.109565  673643 kubeadm.go:310] 		timed out waiting for the condition
	I0317 14:00:08.109575  673643 kubeadm.go:310] 
	I0317 14:00:08.109617  673643 kubeadm.go:310] 	This error is likely caused by:
	I0317 14:00:08.109657  673643 kubeadm.go:310] 		- The kubelet is not running
	I0317 14:00:08.109782  673643 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0317 14:00:08.109799  673643 kubeadm.go:310] 
	I0317 14:00:08.109930  673643 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0317 14:00:08.109984  673643 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0317 14:00:08.110027  673643 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0317 14:00:08.110035  673643 kubeadm.go:310] 
	I0317 14:00:08.110118  673643 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0317 14:00:08.110184  673643 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0317 14:00:08.110190  673643 kubeadm.go:310] 
	I0317 14:00:08.110328  673643 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0317 14:00:08.110435  673643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0317 14:00:08.110496  673643 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0317 14:00:08.110562  673643 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0317 14:00:08.110602  673643 kubeadm.go:310] 
	I0317 14:00:08.110625  673643 kubeadm.go:394] duration metric: took 7m57.828587617s to StartCluster
	I0317 14:00:08.110682  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0317 14:00:08.110737  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 14:00:08.142741  673643 cri.go:89] found id: ""
	I0317 14:00:08.142781  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.142795  673643 logs.go:284] No container was found matching "kube-apiserver"
	I0317 14:00:08.142804  673643 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0317 14:00:08.142877  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 14:00:08.174753  673643 cri.go:89] found id: ""
	I0317 14:00:08.174784  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.174796  673643 logs.go:284] No container was found matching "etcd"
	I0317 14:00:08.174804  673643 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0317 14:00:08.174859  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 14:00:08.204965  673643 cri.go:89] found id: ""
	I0317 14:00:08.204997  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.205009  673643 logs.go:284] No container was found matching "coredns"
	I0317 14:00:08.205017  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0317 14:00:08.205081  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 14:00:08.235717  673643 cri.go:89] found id: ""
	I0317 14:00:08.235749  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.235757  673643 logs.go:284] No container was found matching "kube-scheduler"
	I0317 14:00:08.235767  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0317 14:00:08.235833  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 14:00:08.265585  673643 cri.go:89] found id: ""
	I0317 14:00:08.265613  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.265623  673643 logs.go:284] No container was found matching "kube-proxy"
	I0317 14:00:08.265631  673643 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 14:00:08.265718  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 14:00:08.295600  673643 cri.go:89] found id: ""
	I0317 14:00:08.295629  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.295641  673643 logs.go:284] No container was found matching "kube-controller-manager"
	I0317 14:00:08.295648  673643 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0317 14:00:08.295713  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 14:00:08.327749  673643 cri.go:89] found id: ""
	I0317 14:00:08.327778  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.327787  673643 logs.go:284] No container was found matching "kindnet"
	I0317 14:00:08.327794  673643 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0317 14:00:08.327855  673643 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0317 14:00:08.359913  673643 cri.go:89] found id: ""
	I0317 14:00:08.359944  673643 logs.go:282] 0 containers: []
	W0317 14:00:08.359952  673643 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0317 14:00:08.359962  673643 logs.go:123] Gathering logs for container status ...
	I0317 14:00:08.359975  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 14:00:08.396929  673643 logs.go:123] Gathering logs for kubelet ...
	I0317 14:00:08.396959  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 14:00:08.451498  673643 logs.go:123] Gathering logs for dmesg ...
	I0317 14:00:08.451556  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 14:00:08.464742  673643 logs.go:123] Gathering logs for describe nodes ...
	I0317 14:00:08.464771  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0317 14:00:08.537703  673643 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0317 14:00:08.537733  673643 logs.go:123] Gathering logs for CRI-O ...
	I0317 14:00:08.537749  673643 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0317 14:00:08.658936  673643 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0317 14:00:08.659006  673643 out.go:270] * 
	W0317 14:00:08.659061  673643 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0317 14:00:08.659074  673643 out.go:270] * 
	W0317 14:00:08.659944  673643 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 14:00:08.663521  673643 out.go:201] 
	W0317 14:00:08.664750  673643 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0317 14:00:08.664794  673643 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0317 14:00:08.664812  673643 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0317 14:00:08.666351  673643 out.go:201] 
	I0317 14:00:06.033655  684423 pod_ready.go:103] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:08.034208  684423 pod_ready.go:103] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:10.034443  684423 pod_ready.go:103] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:12.534251  684423 pod_ready.go:103] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:14.536107  684423 pod_ready.go:103] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status "Ready":"False"
	I0317 14:00:15.534010  684423 pod_ready.go:98] pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.172 HostIPs:[{IP:192.168.39.172}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-03-17 14:00:03 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-03-17 14:00:04 +0000 UTC,FinishedAt:2025-03-17 14:00:14 +0000 UTC,ContainerID:cri-o://a9392da0a1a0a548796c19285db75bbfe071219c3b971d1a2482d18a86574671,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://a9392da0a1a0a548796c19285db75bbfe071219c3b971d1a2482d18a86574671 Started:0xc0020858f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002254d20} {Name:kube-api-access-lhzw4 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002254d30}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0317 14:00:15.534037  684423 pod_ready.go:82] duration metric: took 11.50541352s for pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace to be "Ready" ...
	E0317 14:00:15.534049  684423 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-gvs7t" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:15 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-03-17 14:00:03 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.172 HostIPs:[{IP:192.168.39.172}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-03-17 14:00:03 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-03-17 14:00:04 +0000 UTC,FinishedAt:2025-03-17 14:00:14 +0000 UTC,ContainerID:cri-o://a9392da0a1a0a548796c19285db75bbfe071219c3b971d1a2482d18a86574671,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://a9392da0a1a0a548796c19285db75bbfe071219c3b971d1a2482d18a86574671 Started:0xc0020858f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002254d20} {Name:kube-api-access-lhzw4 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002254d30}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0317 14:00:15.534059  684423 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-r8ngr" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.537848  684423 pod_ready.go:93] pod "coredns-668d6bf9bc-r8ngr" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:15.537881  684423 pod_ready.go:82] duration metric: took 3.813823ms for pod "coredns-668d6bf9bc-r8ngr" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.537896  684423 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.540753  684423 pod_ready.go:93] pod "etcd-bridge-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:15.540773  684423 pod_ready.go:82] duration metric: took 2.869364ms for pod "etcd-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.540784  684423 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.543734  684423 pod_ready.go:93] pod "kube-apiserver-bridge-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:15.543759  684423 pod_ready.go:82] duration metric: took 2.967445ms for pod "kube-apiserver-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.543771  684423 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.546758  684423 pod_ready.go:93] pod "kube-controller-manager-bridge-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:15.546781  684423 pod_ready.go:82] duration metric: took 3.002194ms for pod "kube-controller-manager-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.546792  684423 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-kj4kx" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.933502  684423 pod_ready.go:93] pod "kube-proxy-kj4kx" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:15.933536  684423 pod_ready.go:82] duration metric: took 386.736479ms for pod "kube-proxy-kj4kx" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:15.933546  684423 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:16.331873  684423 pod_ready.go:93] pod "kube-scheduler-bridge-788750" in "kube-system" namespace has status "Ready":"True"
	I0317 14:00:16.331907  684423 pod_ready.go:82] duration metric: took 398.352288ms for pod "kube-scheduler-bridge-788750" in "kube-system" namespace to be "Ready" ...
	I0317 14:00:16.331919  684423 pod_ready.go:39] duration metric: took 12.307539787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 14:00:16.331944  684423 api_server.go:52] waiting for apiserver process to appear ...
	I0317 14:00:16.332022  684423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 14:00:16.346729  684423 api_server.go:72] duration metric: took 12.894460738s to wait for apiserver process to appear ...
	I0317 14:00:16.346763  684423 api_server.go:88] waiting for apiserver healthz status ...
	I0317 14:00:16.346789  684423 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0317 14:00:16.351864  684423 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0317 14:00:16.352901  684423 api_server.go:141] control plane version: v1.32.2
	I0317 14:00:16.352933  684423 api_server.go:131] duration metric: took 6.160385ms to wait for apiserver health ...
	I0317 14:00:16.352944  684423 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 14:00:16.532789  684423 system_pods.go:59] 7 kube-system pods found
	I0317 14:00:16.532831  684423 system_pods.go:61] "coredns-668d6bf9bc-r8ngr" [4745e44f-92f0-4965-8322-547a570326b8] Running
	I0317 14:00:16.532839  684423 system_pods.go:61] "etcd-bridge-788750" [3c21b3bd-7442-4e7d-b74f-d67a6e8f0094] Running
	I0317 14:00:16.532845  684423 system_pods.go:61] "kube-apiserver-bridge-788750" [ae0d5960-d302-4928-90e0-83b4938145c2] Running
	I0317 14:00:16.532851  684423 system_pods.go:61] "kube-controller-manager-bridge-788750" [b150c081-ba22-45da-b020-0e38fbb646b8] Running
	I0317 14:00:16.532856  684423 system_pods.go:61] "kube-proxy-kj4kx" [dd396806-07b3-4394-9ac2-038bbadaad2d] Running
	I0317 14:00:16.532862  684423 system_pods.go:61] "kube-scheduler-bridge-788750" [e7e68119-8c5c-4a4d-bd23-a64d3dbee81a] Running
	I0317 14:00:16.532867  684423 system_pods.go:61] "storage-provisioner" [44001b48-9220-4402-9fb4-1c662a5d512e] Running
	I0317 14:00:16.532874  684423 system_pods.go:74] duration metric: took 179.923311ms to wait for pod list to return data ...
	I0317 14:00:16.532887  684423 default_sa.go:34] waiting for default service account to be created ...
	I0317 14:00:16.732122  684423 default_sa.go:45] found service account: "default"
	I0317 14:00:16.732153  684423 default_sa.go:55] duration metric: took 199.259905ms for default service account to be created ...
	I0317 14:00:16.732164  684423 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 14:00:16.932578  684423 system_pods.go:86] 7 kube-system pods found
	I0317 14:00:16.932613  684423 system_pods.go:89] "coredns-668d6bf9bc-r8ngr" [4745e44f-92f0-4965-8322-547a570326b8] Running
	I0317 14:00:16.932620  684423 system_pods.go:89] "etcd-bridge-788750" [3c21b3bd-7442-4e7d-b74f-d67a6e8f0094] Running
	I0317 14:00:16.932626  684423 system_pods.go:89] "kube-apiserver-bridge-788750" [ae0d5960-d302-4928-90e0-83b4938145c2] Running
	I0317 14:00:16.932629  684423 system_pods.go:89] "kube-controller-manager-bridge-788750" [b150c081-ba22-45da-b020-0e38fbb646b8] Running
	I0317 14:00:16.932633  684423 system_pods.go:89] "kube-proxy-kj4kx" [dd396806-07b3-4394-9ac2-038bbadaad2d] Running
	I0317 14:00:16.932637  684423 system_pods.go:89] "kube-scheduler-bridge-788750" [e7e68119-8c5c-4a4d-bd23-a64d3dbee81a] Running
	I0317 14:00:16.932642  684423 system_pods.go:89] "storage-provisioner" [44001b48-9220-4402-9fb4-1c662a5d512e] Running
	I0317 14:00:16.932652  684423 system_pods.go:126] duration metric: took 200.479611ms to wait for k8s-apps to be running ...
	I0317 14:00:16.932663  684423 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 14:00:16.932722  684423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 14:00:16.946942  684423 system_svc.go:56] duration metric: took 14.265343ms WaitForService to wait for kubelet
	I0317 14:00:16.946983  684423 kubeadm.go:582] duration metric: took 13.494722207s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 14:00:16.947005  684423 node_conditions.go:102] verifying NodePressure condition ...
	I0317 14:00:17.132068  684423 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 14:00:17.132107  684423 node_conditions.go:123] node cpu capacity is 2
	I0317 14:00:17.132125  684423 node_conditions.go:105] duration metric: took 185.114846ms to run NodePressure ...
	I0317 14:00:17.132143  684423 start.go:241] waiting for startup goroutines ...
	I0317 14:00:17.132153  684423 start.go:246] waiting for cluster config update ...
	I0317 14:00:17.132170  684423 start.go:255] writing updated cluster config ...
	I0317 14:00:17.132492  684423 ssh_runner.go:195] Run: rm -f paused
	I0317 14:00:17.180890  684423 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 14:00:17.183880  684423 out.go:177] * Done! kubectl is now configured to use "bridge-788750" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.157746900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220904157726431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7011b55c-61b1-4d95-898a-eb2d1a1d2714 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.158403884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2752bd9f-61cb-44b3-8e5e-733da9958e2c name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.158487837Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2752bd9f-61cb-44b3-8e5e-733da9958e2c name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.158540213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2752bd9f-61cb-44b3-8e5e-733da9958e2c name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.190941243Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78b61b27-4644-4b1a-87f7-04f5ec423b8e name=/runtime.v1.RuntimeService/Version
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.191036969Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78b61b27-4644-4b1a-87f7-04f5ec423b8e name=/runtime.v1.RuntimeService/Version
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.192476963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=615f1947-e9b1-4bf4-9213-95b193710a96 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.193051493Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220904193020500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=615f1947-e9b1-4bf4-9213-95b193710a96 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.193687389Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=190c703e-76e7-454c-bddd-1a20e5f8de6d name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.193754308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=190c703e-76e7-454c-bddd-1a20e5f8de6d name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.193792573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=190c703e-76e7-454c-bddd-1a20e5f8de6d name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.226367432Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cfa41719-4bdc-4a47-99eb-6856a4e383e8 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.226482879Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cfa41719-4bdc-4a47-99eb-6856a4e383e8 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.227750008Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=256582ce-5ea3-42d8-9a21-72bb2e2d2330 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.228269161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220904228236618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=256582ce-5ea3-42d8-9a21-72bb2e2d2330 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.228962784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95ae9ad7-00ae-4aa1-a7cd-bcf228718f7d name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.229048954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95ae9ad7-00ae-4aa1-a7cd-bcf228718f7d name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.229098356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=95ae9ad7-00ae-4aa1-a7cd-bcf228718f7d name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.262741326Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ba61361-4d00-43ac-8542-ff2a46078678 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.262850704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ba61361-4d00-43ac-8542-ff2a46078678 name=/runtime.v1.RuntimeService/Version
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.263910960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef2123ea-45b0-4fec-92ae-258e7e545454 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.264427636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742220904264402869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef2123ea-45b0-4fec-92ae-258e7e545454 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.264989839Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1985b01f-e369-4e95-a76b-d5295cfcecdd name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.265071198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1985b01f-e369-4e95-a76b-d5295cfcecdd name=/runtime.v1.RuntimeService/ListContainers
	Mar 17 14:15:04 old-k8s-version-803027 crio[632]: time="2025-03-17 14:15:04.265118443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1985b01f-e369-4e95-a76b-d5295cfcecdd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar17 13:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049623] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037597] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.982931] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.930656] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.556247] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar17 13:52] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.058241] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060217] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.180401] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.139897] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.253988] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.875702] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.060155] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.298688] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.779171] kauditd_printk_skb: 46 callbacks suppressed
	[Mar17 13:56] systemd-fstab-generator[5028]: Ignoring "noauto" option for root device
	[Mar17 13:58] systemd-fstab-generator[5303]: Ignoring "noauto" option for root device
	[  +0.074533] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:15:04 up 23 min,  0 users,  load average: 0.00, 0.01, 0.01
	Linux old-k8s-version-803027 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000b786f0)
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bb5ef0, 0x4f0ac20, 0xc000451590, 0x1, 0xc00009e0c0)
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000246380, 0xc00009e0c0)
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000715070, 0xc000adb540)
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 17 14:14:59 old-k8s-version-803027 kubelet[7133]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 17 14:14:59 old-k8s-version-803027 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 17 14:14:59 old-k8s-version-803027 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 17 14:15:00 old-k8s-version-803027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 175.
	Mar 17 14:15:00 old-k8s-version-803027 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 17 14:15:00 old-k8s-version-803027 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 17 14:15:00 old-k8s-version-803027 kubelet[7142]: I0317 14:15:00.547714    7142 server.go:416] Version: v1.20.0
	Mar 17 14:15:00 old-k8s-version-803027 kubelet[7142]: I0317 14:15:00.547967    7142 server.go:837] Client rotation is on, will bootstrap in background
	Mar 17 14:15:00 old-k8s-version-803027 kubelet[7142]: I0317 14:15:00.549662    7142 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 17 14:15:00 old-k8s-version-803027 kubelet[7142]: W0317 14:15:00.550523    7142 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 17 14:15:00 old-k8s-version-803027 kubelet[7142]: I0317 14:15:00.550600    7142 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
E0317 14:15:04.591487  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/flannel-788750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-803027 -n old-k8s-version-803027
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 2 (235.984554ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-803027" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (353.06s)
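The journal excerpt above shows the kubelet on old-k8s-version-803027 exiting with status 255 and systemd restarting it in a loop (restart counter at 175), which is consistent with the apiserver being reported as Stopped. As a rough diagnostic sketch (profile name taken from this run; the unit state and log contents will differ per run), the same crash loop can be inspected directly on the node:

	# check the kubelet unit state and pull its most recent journal entries
	out/minikube-linux-amd64 -p old-k8s-version-803027 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-803027 ssh "sudo journalctl -u kubelet --no-pager -n 100"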

                                                
                                    

Test pass (277/322)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.66
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.2/json-events 5.01
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.13
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.61
22 TestOffline 59.64
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 126.36
31 TestAddons/serial/GCPAuth/Namespaces 1.85
32 TestAddons/serial/GCPAuth/FakeCredentials 11.47
35 TestAddons/parallel/Registry 18.66
37 TestAddons/parallel/InspektorGadget 11.94
38 TestAddons/parallel/MetricsServer 6.04
40 TestAddons/parallel/CSI 49.82
41 TestAddons/parallel/Headlamp 20.02
42 TestAddons/parallel/CloudSpanner 5.91
43 TestAddons/parallel/LocalPath 55.78
44 TestAddons/parallel/NvidiaDevicePlugin 6.77
45 TestAddons/parallel/Yakd 12.32
47 TestAddons/StoppedEnableDisable 91.27
48 TestCertOptions 46.38
49 TestCertExpiration 291.55
51 TestForceSystemdFlag 61.77
52 TestForceSystemdEnv 63.43
54 TestKVMDriverInstallOrUpdate 6.22
58 TestErrorSpam/setup 41.88
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.74
61 TestErrorSpam/pause 1.49
62 TestErrorSpam/unpause 1.58
63 TestErrorSpam/stop 4.49
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 58.07
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 54.5
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
75 TestFunctional/serial/CacheCmd/cache/add_local 1.98
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 33.61
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.34
86 TestFunctional/serial/LogsFileCmd 1.28
87 TestFunctional/serial/InvalidService 4.04
89 TestFunctional/parallel/ConfigCmd 0.35
90 TestFunctional/parallel/DashboardCmd 14.82
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.82
97 TestFunctional/parallel/ServiceCmdConnect 22.69
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 33.02
101 TestFunctional/parallel/SSHCmd 0.43
102 TestFunctional/parallel/CpCmd 1.25
103 TestFunctional/parallel/MySQL 24.68
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.25
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
113 TestFunctional/parallel/License 0.28
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.14
119 TestFunctional/parallel/ImageCommands/Setup 1.6
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.63
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.26
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.05
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
133 TestFunctional/parallel/ProfileCmd/profile_list 0.35
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.65
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.78
137 TestFunctional/parallel/ImageCommands/ImageRemove 1.77
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 7.69
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
146 TestFunctional/parallel/ServiceCmd/DeployApp 15.43
147 TestFunctional/parallel/MountCmd/any-port 9.55
148 TestFunctional/parallel/ServiceCmd/List 0.83
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.86
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
151 TestFunctional/parallel/MountCmd/specific-port 1.88
152 TestFunctional/parallel/ServiceCmd/Format 0.51
153 TestFunctional/parallel/ServiceCmd/URL 0.51
154 TestFunctional/parallel/MountCmd/VerifyCleanup 0.95
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 191
163 TestMultiControlPlane/serial/DeployApp 8.05
164 TestMultiControlPlane/serial/PingHostFromPods 1.12
165 TestMultiControlPlane/serial/AddWorkerNode 56.9
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
168 TestMultiControlPlane/serial/CopyFile 13.04
169 TestMultiControlPlane/serial/StopSecondaryNode 91.61
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
171 TestMultiControlPlane/serial/RestartSecondaryNode 44.71
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 428.71
174 TestMultiControlPlane/serial/DeleteSecondaryNode 17.99
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.6
176 TestMultiControlPlane/serial/StopCluster 272.91
177 TestMultiControlPlane/serial/RestartCluster 120.99
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
179 TestMultiControlPlane/serial/AddSecondaryNode 74.63
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
184 TestJSONOutput/start/Command 55.82
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.66
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.59
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 7.35
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.05
213 TestMinikubeProfile 84.9
216 TestMountStart/serial/StartWithMountFirst 26.91
217 TestMountStart/serial/VerifyMountFirst 0.39
218 TestMountStart/serial/StartWithMountSecond 26.91
219 TestMountStart/serial/VerifyMountSecond 0.37
220 TestMountStart/serial/DeleteFirst 0.88
221 TestMountStart/serial/VerifyMountPostDelete 0.39
222 TestMountStart/serial/Stop 1.28
223 TestMountStart/serial/RestartStopped 22.3
224 TestMountStart/serial/VerifyMountPostStop 0.37
227 TestMultiNode/serial/FreshStart2Nodes 108.31
228 TestMultiNode/serial/DeployApp2Nodes 5.98
229 TestMultiNode/serial/PingHostFrom2Pods 0.76
230 TestMultiNode/serial/AddNode 50.09
231 TestMultiNode/serial/MultiNodeLabels 0.06
232 TestMultiNode/serial/ProfileList 0.57
233 TestMultiNode/serial/CopyFile 7.21
234 TestMultiNode/serial/StopNode 2.24
235 TestMultiNode/serial/StartAfterStop 84.42
236 TestMultiNode/serial/RestartKeepsNodes 340.84
237 TestMultiNode/serial/DeleteNode 2.62
238 TestMultiNode/serial/StopMultiNode 182.05
239 TestMultiNode/serial/RestartMultiNode 112.95
240 TestMultiNode/serial/ValidateNameConflict 39.78
247 TestScheduledStopUnix 114.01
251 TestRunningBinaryUpgrade 191.79
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
257 TestNoKubernetes/serial/StartWithK8s 97.35
258 TestStoppedBinaryUpgrade/Setup 0.67
259 TestStoppedBinaryUpgrade/Upgrade 145.24
260 TestNoKubernetes/serial/StartWithStopK8s 38.46
261 TestNoKubernetes/serial/Start 29.04
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
263 TestNoKubernetes/serial/ProfileList 16.29
264 TestNoKubernetes/serial/Stop 1.29
265 TestNoKubernetes/serial/StartNoArgs 20.67
274 TestPause/serial/Start 60.37
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
276 TestStoppedBinaryUpgrade/MinikubeLogs 0.83
284 TestNetworkPlugins/group/false 3.13
292 TestStartStop/group/no-preload/serial/FirstStart 90.18
294 TestStartStop/group/embed-certs/serial/FirstStart 58.9
295 TestStartStop/group/no-preload/serial/DeployApp 11.9
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.92
297 TestStartStop/group/no-preload/serial/Stop 91.13
298 TestStartStop/group/embed-certs/serial/DeployApp 10.26
299 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
300 TestStartStop/group/embed-certs/serial/Stop 91.15
302 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.63
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
304 TestStartStop/group/no-preload/serial/SecondStart 386.45
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 90.81
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
309 TestStartStop/group/embed-certs/serial/SecondStart 295.83
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 299.92
314 TestStartStop/group/old-k8s-version/serial/Stop 4.39
315 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
319 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
320 TestStartStop/group/embed-certs/serial/Pause 2.62
322 TestStartStop/group/newest-cni/serial/FirstStart 48.21
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
325 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
326 TestStartStop/group/newest-cni/serial/Stop 10.53
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
328 TestStartStop/group/newest-cni/serial/SecondStart 37.91
329 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
330 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
331 TestStartStop/group/no-preload/serial/Pause 2.76
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
333 TestNetworkPlugins/group/auto/Start 72.53
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.54
337 TestNetworkPlugins/group/kindnet/Start 96.03
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
341 TestStartStop/group/newest-cni/serial/Pause 2.51
342 TestNetworkPlugins/group/calico/Start 113.15
343 TestNetworkPlugins/group/auto/KubeletFlags 0.23
344 TestNetworkPlugins/group/auto/NetCatPod 14.28
345 TestNetworkPlugins/group/auto/DNS 0.14
346 TestNetworkPlugins/group/auto/Localhost 0.12
347 TestNetworkPlugins/group/auto/HairPin 0.11
348 TestNetworkPlugins/group/custom-flannel/Start 72.16
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
351 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
352 TestNetworkPlugins/group/kindnet/DNS 0.13
353 TestNetworkPlugins/group/kindnet/Localhost 0.11
354 TestNetworkPlugins/group/kindnet/HairPin 0.12
355 TestNetworkPlugins/group/enable-default-cni/Start 54.03
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/KubeletFlags 0.23
358 TestNetworkPlugins/group/calico/NetCatPod 10.27
359 TestNetworkPlugins/group/calico/DNS 0.14
360 TestNetworkPlugins/group/calico/Localhost 0.12
361 TestNetworkPlugins/group/calico/HairPin 0.12
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
364 TestNetworkPlugins/group/flannel/Start 74.6
365 TestNetworkPlugins/group/custom-flannel/DNS 0.17
366 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
367 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
368 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
369 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
370 TestNetworkPlugins/group/bridge/Start 62.26
371 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
372 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
373 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
374 TestNetworkPlugins/group/flannel/ControllerPod 6.01
376 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
377 TestNetworkPlugins/group/flannel/NetCatPod 10.24
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
379 TestNetworkPlugins/group/bridge/NetCatPod 10.21
380 TestNetworkPlugins/group/flannel/DNS 0.15
381 TestNetworkPlugins/group/flannel/Localhost 0.12
382 TestNetworkPlugins/group/flannel/HairPin 0.12
383 TestNetworkPlugins/group/bridge/DNS 0.18
384 TestNetworkPlugins/group/bridge/Localhost 0.13
385 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (8.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-793997 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-793997 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.656343233s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.66s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0317 12:41:29.210742  629188 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0317 12:41:29.210863  629188 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-793997
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-793997: exit status 85 (61.796163ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-793997 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC |          |
	|         | -p download-only-793997        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 12:41:20
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 12:41:20.595207  629200 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:41:20.595321  629200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:41:20.595329  629200 out.go:358] Setting ErrFile to fd 2...
	I0317 12:41:20.595333  629200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:41:20.595583  629200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	W0317 12:41:20.595713  629200 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20539-621978/.minikube/config/config.json: open /home/jenkins/minikube-integration/20539-621978/.minikube/config/config.json: no such file or directory
	I0317 12:41:20.596270  629200 out.go:352] Setting JSON to true
	I0317 12:41:20.597241  629200 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8625,"bootTime":1742206656,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 12:41:20.597345  629200 start.go:139] virtualization: kvm guest
	I0317 12:41:20.599462  629200 out.go:97] [download-only-793997] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 12:41:20.599596  629200 notify.go:220] Checking for updates...
	W0317 12:41:20.599586  629200 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball: no such file or directory
	I0317 12:41:20.600887  629200 out.go:169] MINIKUBE_LOCATION=20539
	I0317 12:41:20.602158  629200 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:41:20.603466  629200 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 12:41:20.604612  629200 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 12:41:20.605879  629200 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0317 12:41:20.608135  629200 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0317 12:41:20.608343  629200 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:41:20.643790  629200 out.go:97] Using the kvm2 driver based on user configuration
	I0317 12:41:20.643840  629200 start.go:297] selected driver: kvm2
	I0317 12:41:20.643852  629200 start.go:901] validating driver "kvm2" against <nil>
	I0317 12:41:20.644384  629200 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 12:41:20.644508  629200 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0317 12:41:20.662426  629200 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0317 12:41:20.662508  629200 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 12:41:20.663263  629200 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0317 12:41:20.663482  629200 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 12:41:20.663520  629200 cni.go:84] Creating CNI manager for ""
	I0317 12:41:20.663603  629200 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0317 12:41:20.663617  629200 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 12:41:20.663686  629200 start.go:340] cluster config:
	{Name:download-only-793997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-793997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:41:20.663915  629200 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 12:41:20.665908  629200 out.go:97] Downloading VM boot image ...
	I0317 12:41:20.665952  629200 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0317 12:41:24.387432  629200 out.go:97] Starting "download-only-793997" primary control-plane node in "download-only-793997" cluster
	I0317 12:41:24.387463  629200 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0317 12:41:24.415643  629200 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0317 12:41:24.415671  629200 cache.go:56] Caching tarball of preloaded images
	I0317 12:41:24.415841  629200 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0317 12:41:24.417316  629200 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0317 12:41:24.417333  629200 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0317 12:41:24.444635  629200 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-793997 host does not exist
	  To start a cluster, run: "minikube start -p download-only-793997"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-793997
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (5.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-534794 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-534794 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.006144698s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (5.01s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0317 12:41:34.539148  629188 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0317 12:41:34.539202  629188 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-534794
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-534794: exit status 85 (61.38424ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-793997 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC |                     |
	|         | -p download-only-793997        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:41 UTC |
	| delete  | -p download-only-793997        | download-only-793997 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:41 UTC |
	| start   | -o=json --download-only        | download-only-534794 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC |                     |
	|         | -p download-only-534794        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 12:41:29
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 12:41:29.571737  629407 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:41:29.571998  629407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:41:29.572007  629407 out.go:358] Setting ErrFile to fd 2...
	I0317 12:41:29.572011  629407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:41:29.572198  629407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 12:41:29.572740  629407 out.go:352] Setting JSON to true
	I0317 12:41:29.573683  629407 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8634,"bootTime":1742206656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 12:41:29.573748  629407 start.go:139] virtualization: kvm guest
	I0317 12:41:29.575703  629407 out.go:97] [download-only-534794] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 12:41:29.575857  629407 notify.go:220] Checking for updates...
	I0317 12:41:29.577375  629407 out.go:169] MINIKUBE_LOCATION=20539
	I0317 12:41:29.578868  629407 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:41:29.580243  629407 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 12:41:29.581685  629407 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 12:41:29.582916  629407 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-534794 host does not exist
	  To start a cluster, run: "minikube start -p download-only-534794"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-534794
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I0317 12:41:35.113088  629188 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-652834 --alsologtostderr --binary-mirror http://127.0.0.1:41719 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-652834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-652834
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestOffline (59.64s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-213183 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-213183 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (58.115869067s)
helpers_test.go:175: Cleaning up "offline-crio-213183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-213183
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-213183: (1.527318887s)
--- PASS: TestOffline (59.64s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-012915
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-012915: exit status 85 (54.270962ms)

                                                
                                                
-- stdout --
	* Profile "addons-012915" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-012915"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-012915
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-012915: exit status 85 (54.744335ms)

                                                
                                                
-- stdout --
	* Profile "addons-012915" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-012915"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (126.36s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-012915 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-012915 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m6.357860325s)
--- PASS: TestAddons/Setup (126.36s)
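TestAddons/Setup enables every addon in a single minikube start by stacking --addons flags. As a hedged sketch of the equivalent incremental flow against an already-running profile (addon names are taken from the start command above; which addons are available depends on the minikube version):

	out/minikube-linux-amd64 -p addons-012915 addons enable ingress
	out/minikube-linux-amd64 -p addons-012915 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-012915 addons list   # shows enabled/disabled state per addon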

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (1.85s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-012915 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-012915 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-012915 get secret gcp-auth -n new-namespace: exit status 1 (97.315999ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-012915 logs -l app=gcp-auth -n gcp-auth
I0317 12:43:42.633999  629188 retry.go:31] will retry after 1.57084345s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2025/03/17 12:43:41 GCP Auth Webhook started!
	2025/03/17 12:43:42 Ready to marshal response ...
	2025/03/17 12:43:42 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-012915 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.85s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.47s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-012915 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-012915 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [86b0f221-352a-43ab-8627-f3bd097570e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [86b0f221-352a-43ab-8627-f3bd097570e7] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003728307s
addons_test.go:633: (dbg) Run:  kubectl --context addons-012915 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-012915 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-012915 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.47s)

                                                
                                    
TestAddons/parallel/Registry (18.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.308509ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-8k6gk" [f211c29d-606d-447b-a8fa-69017766f2db] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003138094s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7r5g2" [ada328aa-4416-4e30-a5df-7dc790f2663a] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015518434s
addons_test.go:331: (dbg) Run:  kubectl --context addons-012915 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-012915 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-012915 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.702186171s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 ip
2025/03/17 12:44:22 [DEBUG] GET http://192.168.39.84:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.66s)
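The registry check above exercises the addon from inside the cluster (wget against registry.kube-system.svc.cluster.local) and from the host via port 5000 on the node IP shown in the DEBUG line. A minimal manual probe of the same endpoint, assuming the addon is still enabled and exposed on port 5000 as in this run:

	# query the Docker registry v2 API on the node IP
	NODE_IP=$(out/minikube-linux-amd64 -p addons-012915 ip)
	curl -s "http://${NODE_IP}:5000/v2/_catalog"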

                                                
                                    
TestAddons/parallel/InspektorGadget (11.94s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-htpz8" [42de9d68-579a-426b-9376-4b8d87655630] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.063365318s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 addons disable inspektor-gadget --alsologtostderr -v=1: (5.870540207s)
--- PASS: TestAddons/parallel/InspektorGadget (11.94s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.04s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 26.243865ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-p2svs" [7cb96dd5-6d04-4b62-a0c5-af14472757d1] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003668976s
addons_test.go:402: (dbg) Run:  kubectl --context addons-012915 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.04s)

                                                
                                    
TestAddons/parallel/CSI (49.82s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0317 12:44:17.133070  629188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0317 12:44:17.136920  629188 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0317 12:44:17.136943  629188 kapi.go:107] duration metric: took 3.893493ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.902425ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-012915 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-012915 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9788603d-5129-487f-a6d0-09cea4d86f89] Pending
helpers_test.go:344: "task-pv-pod" [9788603d-5129-487f-a6d0-09cea4d86f89] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9788603d-5129-487f-a6d0-09cea4d86f89] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.005961209s
addons_test.go:511: (dbg) Run:  kubectl --context addons-012915 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-012915 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-012915 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-012915 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-012915 delete pod task-pv-pod: (1.225889555s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-012915 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-012915 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-012915 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5eeb38fe-0c9f-4504-9741-270bd6332865] Pending
helpers_test.go:344: "task-pv-pod-restore" [5eeb38fe-0c9f-4504-9741-270bd6332865] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5eeb38fe-0c9f-4504-9741-270bd6332865] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003181197s
addons_test.go:553: (dbg) Run:  kubectl --context addons-012915 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-012915 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-012915 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 addons disable volumesnapshots --alsologtostderr -v=1: (1.011343086s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.701996392s)
--- PASS: TestAddons/parallel/CSI (49.82s)
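The CSI test polls the PVC phase and the VolumeSnapshot readyToUse field with repeated jsonpath gets. A hedged equivalent using kubectl wait (requires a kubectl release that supports --for=jsonpath; resource names are the ones created from the testdata manifests above, and they only exist until the test deletes them):

	kubectl --context addons-012915 wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m
	kubectl --context addons-012915 wait volumesnapshot/new-snapshot-demo --for=jsonpath='{.status.readyToUse}'=true --timeout=6m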

                                                
                                    
TestAddons/parallel/Headlamp (20.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-012915 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-012915 --alsologtostderr -v=1: (1.149265003s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-55dtz" [385d34fc-c6bc-429b-9ddb-2f4ae9f3dc15] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-55dtz" [385d34fc-c6bc-429b-9ddb-2f4ae9f3dc15] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-55dtz" [385d34fc-c6bc-429b-9ddb-2f4ae9f3dc15] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-55dtz" [385d34fc-c6bc-429b-9ddb-2f4ae9f3dc15] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004193301s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 addons disable headlamp --alsologtostderr -v=1: (5.863481554s)
--- PASS: TestAddons/parallel/Headlamp (20.02s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.91s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-2z5qz" [ced9e736-b94b-42f5-ae21-f4ca3a8f8c36] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011600615s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.91s)

                                                
                                    
TestAddons/parallel/LocalPath (55.78s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-012915 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-012915 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012915 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b087c292-5570-40ec-8c1c-c3d04c8b2bd1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b087c292-5570-40ec-8c1c-c3d04c8b2bd1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b087c292-5570-40ec-8c1c-c3d04c8b2bd1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003611894s
addons_test.go:906: (dbg) Run:  kubectl --context addons-012915 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 ssh "cat /opt/local-path-provisioner/pvc-dfd0802a-c635-46e0-a42e-5cc628c5aa4b_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-012915 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-012915 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.944252637s)
--- PASS: TestAddons/parallel/LocalPath (55.78s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.77s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gr4p2" [c678dd53-1e45-417a-b06d-c754b6a9ace2] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002930105s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.77s)

                                                
                                    
TestAddons/parallel/Yakd (12.32s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-ljdfv" [89155d65-8566-4110-8e43-d2a033dd05a0] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003357653s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 addons disable yakd --alsologtostderr -v=1: (6.313798569s)
--- PASS: TestAddons/parallel/Yakd (12.32s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-012915
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-012915: (1m30.967256552s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-012915
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-012915
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-012915
--- PASS: TestAddons/StoppedEnableDisable (91.27s)

                                                
                                    
TestCertOptions (46.38s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-197082 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-197082 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (44.91456569s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-197082 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-197082 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-197082 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-197082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-197082
--- PASS: TestCertOptions (46.38s)
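For reference, a minimal Go sketch (not part of the test output) of the SAN check that the openssl step above performs against the --apiserver-ips/--apiserver-names values. The local file name apiserver.crt is an assumption; in the test the certificate sits at /var/lib/minikube/certs/apiserver.crt inside the VM.

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)
	
	func main() {
		// Read a local copy of the API server certificate (hypothetical path).
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// The test expects the extra names/IPs passed on the command line to show up here.
		fmt.Println("DNS SANs:", cert.DNSNames)    // e.g. localhost, www.google.com
		fmt.Println("IP SANs: ", cert.IPAddresses) // e.g. 127.0.0.1, 192.168.15.15
	}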

                                                
                                    
TestCertExpiration (291.55s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-355456 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0317 13:43:44.452849  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-355456 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m12.782270029s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-355456 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-355456 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (37.928204026s)
helpers_test.go:175: Cleaning up "cert-expiration-355456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-355456
--- PASS: TestCertExpiration (291.55s)
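A minimal Go sketch of the idea behind the --cert-expiration flags above: parse a certificate and report the time left until NotAfter. The file name client.crt is only an illustrative assumption; the test itself just runs minikube start twice (3m, then 8760h) and lets minikube handle regeneration.

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		data, err := os.ReadFile("client.crt") // hypothetical local copy of a profile cert
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		remaining := time.Until(cert.NotAfter)
		fmt.Printf("expires %s (%s from now)\n", cert.NotAfter.Format(time.RFC3339), remaining.Round(time.Minute))
		if remaining <= 0 {
			fmt.Println("certificate already expired; it would need to be rotated")
		}
	}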

                                                
                                    
TestForceSystemdFlag (61.77s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-638911 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-638911 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.498278962s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-638911 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-638911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-638911
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-638911: (1.066622055s)
--- PASS: TestForceSystemdFlag (61.77s)
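A minimal sketch of what the "cat /etc/crio/crio.conf.d/02-crio.conf" step above is there to confirm: that the drop-in selects the systemd cgroup manager when --force-systemd is set. The key name cgroup_manager and the local file name are assumptions, not taken from the log.

	package main
	
	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)
	
	func main() {
		f, err := os.Open("02-crio.conf") // hypothetical local copy of the CRI-O drop-in
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			// Look for the cgroup manager setting (assumed key name).
			if strings.HasPrefix(line, "cgroup_manager") {
				fmt.Println("found:", line) // expected to read something like cgroup_manager = "systemd"
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}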

                                                
                                    
TestForceSystemdEnv (63.43s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-662195 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-662195 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m2.414207352s)
helpers_test.go:175: Cleaning up "force-systemd-env-662195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-662195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-662195: (1.017461296s)
--- PASS: TestForceSystemdEnv (63.43s)

                                                
                                    
TestKVMDriverInstallOrUpdate (6.22s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0317 13:43:09.428962  629188 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0317 13:43:09.429151  629188 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0317 13:43:09.457812  629188 install.go:62] docker-machine-driver-kvm2: exit status 1
W0317 13:43:09.458032  629188 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0317 13:43:09.458086  629188 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2277918816/001/docker-machine-driver-kvm2
I0317 13:43:09.686297  629188 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2277918816/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0005e26d8 gz:0xc0005e2780 tar:0xc0005e2720 tar.bz2:0xc0005e2730 tar.gz:0xc0005e2740 tar.xz:0xc0005e2750 tar.zst:0xc0005e2760 tbz2:0xc0005e2730 tgz:0xc0005e2740 txz:0xc0005e2750 tzst:0xc0005e2760 xz:0xc0005e2788 zip:0xc0005e27a0 zst:0xc0005e27b0] Getters:map[file:0xc001699e20 http:0xc0006bf590 https:0xc0006bf5e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0317 13:43:09.686348  629188 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2277918816/001/docker-machine-driver-kvm2
I0317 13:43:13.942414  629188 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0317 13:43:13.942518  629188 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0317 13:43:13.975152  629188 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0317 13:43:13.975183  629188 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0317 13:43:13.975249  629188 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0317 13:43:13.975281  629188 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2277918816/002/docker-machine-driver-kvm2
I0317 13:43:14.028206  629188 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2277918816/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0005e26d8 gz:0xc0005e2780 tar:0xc0005e2720 tar.bz2:0xc0005e2730 tar.gz:0xc0005e2740 tar.xz:0xc0005e2750 tar.zst:0xc0005e2760 tbz2:0xc0005e2730 tgz:0xc0005e2740 txz:0xc0005e2750 tzst:0xc0005e2760 xz:0xc0005e2788 zip:0xc0005e27a0 zst:0xc0005e27b0] Getters:map[file:0xc0006b63d0 http:0xc000073860 https:0xc000073950] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0317 13:43:14.028248  629188 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2277918816/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (6.22s)
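The log above shows the fallback behaviour under test: the arch-specific asset's checksum download returns 404, so the common (un-suffixed) release asset is fetched instead. A minimal Go sketch of that pattern, with checksum verification omitted for brevity (the real code verifies the .sha256 file):

	package main
	
	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)
	
	// download fetches url into dst, treating any non-200 status as an error.
	func download(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}
	
	func main() {
		base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
		dst := "docker-machine-driver-kvm2"
		// Try the arch-specific asset first, then fall back to the common version.
		if err := download(base+"-amd64", dst); err != nil {
			fmt.Println("arch specific download failed:", err, "- trying the common version")
			if err := download(base, dst); err != nil {
				fmt.Println("download failed:", err)
				os.Exit(1)
			}
		}
		fmt.Println("driver saved to", dst)
	}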

                                                
                                    
TestErrorSpam/setup (41.88s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-644602 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-644602 --driver=kvm2  --container-runtime=crio
E0317 12:48:44.455779  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:48:44.462263  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:48:44.473704  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:48:44.495202  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:48:44.536645  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:48:44.618134  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:48:44.779709  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:48:45.101383  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:48:45.743496  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:48:47.025166  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:48:49.588112  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:48:54.710110  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:49:04.951583  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-644602 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-644602 --driver=kvm2  --container-runtime=crio: (41.882180982s)
--- PASS: TestErrorSpam/setup (41.88s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
TestErrorSpam/unpause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 unpause
E0317 12:49:25.434083  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

                                                
                                    
TestErrorSpam/stop (4.49s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 stop: (1.592300893s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 stop: (1.345158364s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-644602 --log_dir /tmp/nospam-644602 stop: (1.55354385s)
--- PASS: TestErrorSpam/stop (4.49s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20539-621978/.minikube/files/etc/test/nested/copy/629188/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (58.07s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141794 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0317 12:50:06.395725  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-141794 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (58.07412172s)
--- PASS: TestFunctional/serial/StartWithProxy (58.07s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (54.5s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0317 12:50:29.304069  629188 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141794 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-141794 --alsologtostderr -v=8: (54.497219489s)
functional_test.go:680: soft start took 54.498042525s for "functional-141794" cluster.
I0317 12:51:23.801624  629188 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (54.50s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-141794 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 cache add registry.k8s.io/pause:3.1: (1.130292754s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 cache add registry.k8s.io/pause:3.3: (1.171277839s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 cache add registry.k8s.io/pause:latest: (1.141850425s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-141794 /tmp/TestFunctionalserialCacheCmdcacheadd_local3796004440/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 cache add minikube-local-cache-test:functional-141794
E0317 12:51:28.317597  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 cache add minikube-local-cache-test:functional-141794: (1.671105056s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 cache delete minikube-local-cache-test:functional-141794
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-141794
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.98s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141794 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.332469ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 cache reload: (1.047399571s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 kubectl -- --context functional-141794 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-141794 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.61s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141794 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-141794 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.608663187s)
functional_test.go:778: restart took 33.608796204s for "functional-141794" cluster.
I0317 12:52:05.357424  629188 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (33.61s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-141794 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 logs: (1.340305336s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 logs --file /tmp/TestFunctionalserialLogsFileCmd2503243003/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 logs --file /tmp/TestFunctionalserialLogsFileCmd2503243003/001/logs.txt: (1.283285877s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                    
TestFunctional/serial/InvalidService (4.04s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-141794 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-141794
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-141794: exit status 115 (279.899345ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.221:31756 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-141794 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.04s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141794 config get cpus: exit status 14 (53.901342ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141794 config get cpus: exit status 14 (48.637264ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-141794 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-141794 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 638068: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.82s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141794 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-141794 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.101483ms)

                                                
                                                
-- stdout --
	* [functional-141794] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 12:52:47.230294  637546 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:52:47.230639  637546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:52:47.230653  637546 out.go:358] Setting ErrFile to fd 2...
	I0317 12:52:47.230661  637546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:52:47.230951  637546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 12:52:47.231734  637546 out.go:352] Setting JSON to false
	I0317 12:52:47.233148  637546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9311,"bootTime":1742206656,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 12:52:47.233247  637546 start.go:139] virtualization: kvm guest
	I0317 12:52:47.235321  637546 out.go:177] * [functional-141794] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 12:52:47.236841  637546 notify.go:220] Checking for updates...
	I0317 12:52:47.236908  637546 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 12:52:47.238453  637546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:52:47.239916  637546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 12:52:47.241489  637546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 12:52:47.242923  637546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 12:52:47.244170  637546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 12:52:47.246121  637546 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 12:52:47.246784  637546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:52:47.246854  637546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:52:47.262615  637546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44471
	I0317 12:52:47.263245  637546 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:52:47.263844  637546 main.go:141] libmachine: Using API Version  1
	I0317 12:52:47.263873  637546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:52:47.264303  637546 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:52:47.264498  637546 main.go:141] libmachine: (functional-141794) Calling .DriverName
	I0317 12:52:47.264827  637546 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:52:47.265362  637546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:52:47.265415  637546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:52:47.285634  637546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40801
	I0317 12:52:47.286040  637546 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:52:47.286527  637546 main.go:141] libmachine: Using API Version  1
	I0317 12:52:47.286560  637546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:52:47.286991  637546 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:52:47.287191  637546 main.go:141] libmachine: (functional-141794) Calling .DriverName
	I0317 12:52:47.321610  637546 out.go:177] * Using the kvm2 driver based on existing profile
	I0317 12:52:47.322777  637546 start.go:297] selected driver: kvm2
	I0317 12:52:47.322795  637546 start.go:901] validating driver "kvm2" against &{Name:functional-141794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-141794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:52:47.322908  637546 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 12:52:47.325004  637546 out.go:201] 
	W0317 12:52:47.326373  637546 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0317 12:52:47.327651  637546 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141794 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
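A minimal sketch of the validation behind the dry-run failure above, where --memory 250MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY. The 1800MB floor and the exit status 23 are taken from the log; everything else is illustrative.

	package main
	
	import (
		"fmt"
		"os"
	)
	
	const minUsableMB = 1800 // usable minimum reported in the error above
	
	func main() {
		requestedMB := 250 // what --memory 250MB asks for
		if requestedMB < minUsableMB {
			fmt.Fprintf(os.Stderr,
				"X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n",
				requestedMB, minUsableMB)
			os.Exit(23) // same exit status the dry run returned
		}
		fmt.Println("memory request accepted")
	}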

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141794 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-141794 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.664306ms)

                                                
                                                
-- stdout --
	* [functional-141794] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 12:52:37.373804  636860 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:52:37.374068  636860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:52:37.374077  636860 out.go:358] Setting ErrFile to fd 2...
	I0317 12:52:37.374081  636860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:52:37.374398  636860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 12:52:37.374948  636860 out.go:352] Setting JSON to false
	I0317 12:52:37.375959  636860 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9301,"bootTime":1742206656,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 12:52:37.376080  636860 start.go:139] virtualization: kvm guest
	I0317 12:52:37.378078  636860 out.go:177] * [functional-141794] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0317 12:52:37.379790  636860 notify.go:220] Checking for updates...
	I0317 12:52:37.379813  636860 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 12:52:37.381322  636860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:52:37.382587  636860 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 12:52:37.383886  636860 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 12:52:37.385071  636860 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 12:52:37.386282  636860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 12:52:37.388109  636860 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 12:52:37.388738  636860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:52:37.388840  636860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:52:37.404497  636860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0317 12:52:37.405000  636860 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:52:37.405528  636860 main.go:141] libmachine: Using API Version  1
	I0317 12:52:37.405553  636860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:52:37.405941  636860 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:52:37.406143  636860 main.go:141] libmachine: (functional-141794) Calling .DriverName
	I0317 12:52:37.406503  636860 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:52:37.406853  636860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:52:37.406904  636860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:52:37.422600  636860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42447
	I0317 12:52:37.423233  636860 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:52:37.423749  636860 main.go:141] libmachine: Using API Version  1
	I0317 12:52:37.423771  636860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:52:37.424175  636860 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:52:37.424374  636860 main.go:141] libmachine: (functional-141794) Calling .DriverName
	I0317 12:52:37.457715  636860 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0317 12:52:37.458913  636860 start.go:297] selected driver: kvm2
	I0317 12:52:37.458932  636860 start.go:901] validating driver "kvm2" against &{Name:functional-141794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-141794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:52:37.459023  636860 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 12:52:37.461099  636860 out.go:201] 
	W0317 12:52:37.462219  636860 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0317 12:52:37.463394  636860 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
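For context, the failure shown above is the same RSRC_INSUFFICIENT_REQ_MEMORY validation exercised by the DryRun test, only rendered in French. The sketch below is a rough manual reproduction; the LC_ALL setting is an assumption about how the locale gets selected, since the log only shows the localized output.
	# Sketch, not part of the test run: trigger the dry-run memory validation by hand.
	out/minikube-linux-amd64 start -p functional-141794 --dry-run --memory 250MB \
	  --alsologtostderr --driver=kvm2 --container-runtime=crio
	echo $?   # expected: 23, the exit code recorded above for RSRC_INSUFFICIENT_REQ_MEMORY
	# With a French locale (assumed to be picked up from LC_ALL) the same error is localized:
	LC_ALL=fr out/minikube-linux-amd64 start -p functional-141794 --dry-run --memory 250MB \
	  --driver=kvm2 --container-runtime=crio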

                                                
                                    
TestFunctional/parallel/StatusCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.82s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (22.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-141794 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-141794 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-d5c46" [2dd16531-95c6-45df-9e01-33df169674e9] Pending
helpers_test.go:344: "hello-node-connect-58f9cf68d8-d5c46" [2dd16531-95c6-45df-9e01-33df169674e9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-d5c46" [2dd16531-95c6-45df-9e01-33df169674e9] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.003157542s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.221:32375
functional_test.go:1692: http://192.168.39.221:32375: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-d5c46

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.221:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.221:32375
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.69s)
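The test above walks the standard NodePort flow: create a deployment, expose it, resolve the service URL through minikube, then probe it. A condensed sketch of that flow follows; names and the image are taken from the log, and the wait step merely stands in for the test's own pod-readiness polling.
	kubectl --context functional-141794 create deployment hello-node-connect \
	  --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-141794 expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl --context functional-141794 wait --for=condition=available deployment/hello-node-connect --timeout=10m
	URL=$(out/minikube-linux-amd64 -p functional-141794 service hello-node-connect --url)
	curl -s "$URL"   # echoserver replies with the pod hostname and request headers, as shown above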

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (33.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7653a4ba-899e-415f-a407-b491a2cc4116] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.289558539s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-141794 get storageclass -o=json
I0317 12:52:22.830314  629188 kapi.go:150] Service nginx-svc in namespace default found.
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-141794 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-141794 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-141794 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f3753ddc-2eba-415c-add2-b418891966ed] Pending
helpers_test.go:344: "sp-pod" [f3753ddc-2eba-415c-add2-b418891966ed] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f3753ddc-2eba-415c-add2-b418891966ed] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003759421s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-141794 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-141794 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-141794 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [17891ee4-42aa-42c8-8b6e-acd1cc97f9b7] Pending
helpers_test.go:344: "sp-pod" [17891ee4-42aa-42c8-8b6e-acd1cc97f9b7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [17891ee4-42aa-42c8-8b6e-acd1cc97f9b7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.059665564s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-141794 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.02s)
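The persistence check boils down to: bind a PVC, write through it from a pod, delete and recreate that pod, and confirm the file is still there. A condensed sketch of the sequence in the log; the manifests are the test's own testdata files, and the readiness waits between steps are omitted here.
	kubectl --context functional-141794 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-141794 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-141794 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-141794 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-141794 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-141794 exec sp-pod -- ls /tmp/mount   # expect "foo" to survive the pod recreation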

                                                
                                    
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh -n functional-141794 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 cp functional-141794:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd659738017/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh -n functional-141794 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh -n functional-141794 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)
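The copy test exercises both directions of "minikube cp": a local path into the node, and the <node>:<path> form back out. A sketch of the round trip; the local destination path below is arbitrary, unlike the temp directory the test uses.
	out/minikube-linux-amd64 -p functional-141794 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-141794 cp functional-141794:/home/docker/cp-test.txt ./cp-test-copy.txt
	diff testdata/cp-test.txt ./cp-test-copy.txt   # the files should be identical after the round trip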

                                                
                                    
TestFunctional/parallel/MySQL (24.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-141794 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-j7928" [d30f9fa2-e4c0-4813-beed-0400625fe5f8] Pending
helpers_test.go:344: "mysql-58ccfd96bb-j7928" [d30f9fa2-e4c0-4813-beed-0400625fe5f8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-j7928" [d30f9fa2-e4c0-4813-beed-0400625fe5f8] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.003810155s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-141794 exec mysql-58ccfd96bb-j7928 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-141794 exec mysql-58ccfd96bb-j7928 -- mysql -ppassword -e "show databases;": exit status 1 (137.031283ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0317 12:52:35.998010  629188 retry.go:31] will retry after 1.183638306s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-141794 exec mysql-58ccfd96bb-j7928 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.68s)
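Note that the first exec fails with ERROR 2002 because mysqld is not yet accepting connections on its socket even though the pod reports Running; the test simply retries and succeeds. A sketch of that retry, using the pod name from this run:
	for i in $(seq 1 10); do
	  kubectl --context functional-141794 exec mysql-58ccfd96bb-j7928 -- \
	    mysql -ppassword -e "show databases;" && break
	  sleep 2   # give mysqld time to open /var/run/mysqld/mysqld.sock
	done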

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/629188/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "sudo cat /etc/test/nested/copy/629188/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
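FileSync verifies that files staged on the host appear inside the VM at the same path. The sketch below assumes the usual minikube convention that anything under $MINIKUBE_HOME/files is copied into the node when the profile starts; the staging step itself is not visible in this log.
	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/629188"
	echo "Test file for checking file sync process" \
	  > "$MINIKUBE_HOME/files/etc/test/nested/copy/629188/hosts"
	# after the profile is (re)started, the file should be readable inside the VM:
	out/minikube-linux-amd64 -p functional-141794 ssh "sudo cat /etc/test/nested/copy/629188/hosts"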

                                                
                                    
TestFunctional/parallel/CertSync (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/629188.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "sudo cat /etc/ssl/certs/629188.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/629188.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "sudo cat /usr/share/ca-certificates/629188.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/6291882.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "sudo cat /etc/ssl/certs/6291882.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/6291882.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "sudo cat /usr/share/ca-certificates/6291882.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.25s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-141794 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141794 ssh "sudo systemctl is-active docker": exit status 1 (222.133002ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141794 ssh "sudo systemctl is-active containerd": exit status 1 (232.197601ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
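The two non-zero exits above are expected: with crio selected as the container runtime, docker and containerd are disabled, so "systemctl is-active" prints "inactive" and exits with status 3, and minikube ssh surfaces that as a non-zero exit of its own. A sketch of the same check:
	out/minikube-linux-amd64 -p functional-141794 ssh "sudo systemctl is-active docker" \
	  || echo "docker unit is not active, as expected for a crio profile"
	out/minikube-linux-amd64 -p functional-141794 ssh "sudo systemctl is-active containerd" \
	  || echo "containerd unit is not active, as expected for a crio profile"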

                                                
                                    
TestFunctional/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-141794 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-141794
localhost/kicbase/echo-server:functional-141794
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-141794 image ls --format short --alsologtostderr:
I0317 12:52:49.347153  638024 out.go:345] Setting OutFile to fd 1 ...
I0317 12:52:49.347436  638024 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:52:49.347446  638024 out.go:358] Setting ErrFile to fd 2...
I0317 12:52:49.347450  638024 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:52:49.347708  638024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
I0317 12:52:49.348261  638024 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:52:49.348357  638024 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:52:49.348711  638024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:52:49.348768  638024 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:52:49.367246  638024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
I0317 12:52:49.368122  638024 main.go:141] libmachine: () Calling .GetVersion
I0317 12:52:49.368757  638024 main.go:141] libmachine: Using API Version  1
I0317 12:52:49.368798  638024 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:52:49.369322  638024 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:52:49.369588  638024 main.go:141] libmachine: (functional-141794) Calling .GetState
I0317 12:52:49.371793  638024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:52:49.371903  638024 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:52:49.388475  638024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35961
I0317 12:52:49.389006  638024 main.go:141] libmachine: () Calling .GetVersion
I0317 12:52:49.389510  638024 main.go:141] libmachine: Using API Version  1
I0317 12:52:49.389537  638024 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:52:49.389906  638024 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:52:49.390065  638024 main.go:141] libmachine: (functional-141794) Calling .DriverName
I0317 12:52:49.390307  638024 ssh_runner.go:195] Run: systemctl --version
I0317 12:52:49.390345  638024 main.go:141] libmachine: (functional-141794) Calling .GetSSHHostname
I0317 12:52:49.393195  638024 main.go:141] libmachine: (functional-141794) DBG | domain functional-141794 has defined MAC address 52:54:00:ed:ae:0a in network mk-functional-141794
I0317 12:52:49.393654  638024 main.go:141] libmachine: (functional-141794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ae:0a", ip: ""} in network mk-functional-141794: {Iface:virbr1 ExpiryTime:2025-03-17 13:49:45 +0000 UTC Type:0 Mac:52:54:00:ed:ae:0a Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-141794 Clientid:01:52:54:00:ed:ae:0a}
I0317 12:52:49.393690  638024 main.go:141] libmachine: (functional-141794) DBG | domain functional-141794 has defined IP address 192.168.39.221 and MAC address 52:54:00:ed:ae:0a in network mk-functional-141794
I0317 12:52:49.393794  638024 main.go:141] libmachine: (functional-141794) Calling .GetSSHPort
I0317 12:52:49.393930  638024 main.go:141] libmachine: (functional-141794) Calling .GetSSHKeyPath
I0317 12:52:49.394032  638024 main.go:141] libmachine: (functional-141794) Calling .GetSSHUsername
I0317 12:52:49.394230  638024 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/functional-141794/id_rsa Username:docker}
I0317 12:52:49.560241  638024 ssh_runner.go:195] Run: sudo crictl images --output json
I0317 12:52:49.636831  638024 main.go:141] libmachine: Making call to close driver server
I0317 12:52:49.636848  638024 main.go:141] libmachine: (functional-141794) Calling .Close
I0317 12:52:49.637261  638024 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:52:49.637282  638024 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:52:49.637294  638024 main.go:141] libmachine: Making call to close driver server
I0317 12:52:49.637303  638024 main.go:141] libmachine: (functional-141794) Calling .Close
I0317 12:52:49.639369  638024 main.go:141] libmachine: (functional-141794) DBG | Closing plugin on server side
I0317 12:52:49.639382  638024 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:52:49.639392  638024 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-141794 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/minikube-local-cache-test     | functional-141794  | 35b9c0eb4ed48 | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | alpine             | 1ff4bb4faebcf | 49.3MB |
| docker.io/library/nginx                 | latest             | b52e0b094bc0e | 196MB  |
| localhost/kicbase/echo-server           | functional-141794  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-141794 image ls --format table --alsologtostderr:
I0317 12:52:49.911921  638170 out.go:345] Setting OutFile to fd 1 ...
I0317 12:52:49.912043  638170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:52:49.912054  638170 out.go:358] Setting ErrFile to fd 2...
I0317 12:52:49.912060  638170 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:52:49.912286  638170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
I0317 12:52:49.912826  638170 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:52:49.912920  638170 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:52:49.913344  638170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:52:49.913392  638170 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:52:49.931364  638170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
I0317 12:52:49.931837  638170 main.go:141] libmachine: () Calling .GetVersion
I0317 12:52:49.932394  638170 main.go:141] libmachine: Using API Version  1
I0317 12:52:49.932433  638170 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:52:49.932867  638170 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:52:49.933052  638170 main.go:141] libmachine: (functional-141794) Calling .GetState
I0317 12:52:49.935114  638170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:52:49.935161  638170 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:52:49.949811  638170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
I0317 12:52:49.950348  638170 main.go:141] libmachine: () Calling .GetVersion
I0317 12:52:49.950914  638170 main.go:141] libmachine: Using API Version  1
I0317 12:52:49.950933  638170 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:52:49.951229  638170 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:52:49.951457  638170 main.go:141] libmachine: (functional-141794) Calling .DriverName
I0317 12:52:49.951666  638170 ssh_runner.go:195] Run: systemctl --version
I0317 12:52:49.951695  638170 main.go:141] libmachine: (functional-141794) Calling .GetSSHHostname
I0317 12:52:49.955040  638170 main.go:141] libmachine: (functional-141794) DBG | domain functional-141794 has defined MAC address 52:54:00:ed:ae:0a in network mk-functional-141794
I0317 12:52:49.955578  638170 main.go:141] libmachine: (functional-141794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ae:0a", ip: ""} in network mk-functional-141794: {Iface:virbr1 ExpiryTime:2025-03-17 13:49:45 +0000 UTC Type:0 Mac:52:54:00:ed:ae:0a Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-141794 Clientid:01:52:54:00:ed:ae:0a}
I0317 12:52:49.955619  638170 main.go:141] libmachine: (functional-141794) DBG | domain functional-141794 has defined IP address 192.168.39.221 and MAC address 52:54:00:ed:ae:0a in network mk-functional-141794
I0317 12:52:49.955740  638170 main.go:141] libmachine: (functional-141794) Calling .GetSSHPort
I0317 12:52:49.955919  638170 main.go:141] libmachine: (functional-141794) Calling .GetSSHKeyPath
I0317 12:52:49.956071  638170 main.go:141] libmachine: (functional-141794) Calling .GetSSHUsername
I0317 12:52:49.956224  638170 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/functional-141794/id_rsa Username:docker}
I0317 12:52:50.052340  638170 ssh_runner.go:195] Run: sudo crictl images --output json
I0317 12:52:50.111846  638170 main.go:141] libmachine: Making call to close driver server
I0317 12:52:50.111861  638170 main.go:141] libmachine: (functional-141794) Calling .Close
I0317 12:52:50.112157  638170 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:52:50.112174  638170 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:52:50.112192  638170 main.go:141] libmachine: Making call to close driver server
I0317 12:52:50.112200  638170 main.go:141] libmachine: (functional-141794) Calling .Close
I0317 12:52:50.112427  638170 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:52:50.112441  638170 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-141794 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5
b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"b52e0b094bc0e26c9eddc9e4ab7a64ce0033c3360d8b7ad4ff4132c4e03e8f7b","repoDigests":["docker.io/library/nginx@sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44","docker.io/library/nginx@sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496"],"repoTags":["docker.io/library/nginx:latest"],"size"
:"196159380"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-141794"],"size":"4943877"},{"id":"35b9c0eb4ed485de42e12fdf53390917e3d45d17418d28f79b6bd0bd04fd63f3","repoDigests":["localhost/minikube-local-cache-test@sha256:c5e6c837ba1be67874b3b9c94539cb4b18387d629a6bb388dd35dd50e0268035"],"repoTags":["localhost/minikube-local-cache-test:functional-141794"],"size":"3328"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647
b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08
dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0
927fd21e219d0af8bc0591","docker.io/library/nginx@sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"49323988"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["regis
try.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-141794 image ls --format json --alsologtostderr:
I0317 12:52:49.903923  638169 out.go:345] Setting OutFile to fd 1 ...
I0317 12:52:49.904197  638169 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:52:49.904212  638169 out.go:358] Setting ErrFile to fd 2...
I0317 12:52:49.904218  638169 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:52:49.904534  638169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
I0317 12:52:49.905339  638169 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:52:49.905505  638169 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:52:49.906120  638169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:52:49.906202  638169 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:52:49.925150  638169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
I0317 12:52:49.926036  638169 main.go:141] libmachine: () Calling .GetVersion
I0317 12:52:49.926675  638169 main.go:141] libmachine: Using API Version  1
I0317 12:52:49.926694  638169 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:52:49.927173  638169 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:52:49.927423  638169 main.go:141] libmachine: (functional-141794) Calling .GetState
I0317 12:52:49.929878  638169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:52:49.929941  638169 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:52:49.947060  638169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
I0317 12:52:49.947584  638169 main.go:141] libmachine: () Calling .GetVersion
I0317 12:52:49.948162  638169 main.go:141] libmachine: Using API Version  1
I0317 12:52:49.948194  638169 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:52:49.948579  638169 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:52:49.948802  638169 main.go:141] libmachine: (functional-141794) Calling .DriverName
I0317 12:52:49.949040  638169 ssh_runner.go:195] Run: systemctl --version
I0317 12:52:49.949084  638169 main.go:141] libmachine: (functional-141794) Calling .GetSSHHostname
I0317 12:52:49.952494  638169 main.go:141] libmachine: (functional-141794) DBG | domain functional-141794 has defined MAC address 52:54:00:ed:ae:0a in network mk-functional-141794
I0317 12:52:49.953008  638169 main.go:141] libmachine: (functional-141794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ae:0a", ip: ""} in network mk-functional-141794: {Iface:virbr1 ExpiryTime:2025-03-17 13:49:45 +0000 UTC Type:0 Mac:52:54:00:ed:ae:0a Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-141794 Clientid:01:52:54:00:ed:ae:0a}
I0317 12:52:49.953119  638169 main.go:141] libmachine: (functional-141794) DBG | domain functional-141794 has defined IP address 192.168.39.221 and MAC address 52:54:00:ed:ae:0a in network mk-functional-141794
I0317 12:52:49.953458  638169 main.go:141] libmachine: (functional-141794) Calling .GetSSHPort
I0317 12:52:49.953642  638169 main.go:141] libmachine: (functional-141794) Calling .GetSSHKeyPath
I0317 12:52:49.953755  638169 main.go:141] libmachine: (functional-141794) Calling .GetSSHUsername
I0317 12:52:49.953878  638169 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/functional-141794/id_rsa Username:docker}
I0317 12:52:50.050710  638169 ssh_runner.go:195] Run: sudo crictl images --output json
I0317 12:52:50.094693  638169 main.go:141] libmachine: Making call to close driver server
I0317 12:52:50.094715  638169 main.go:141] libmachine: (functional-141794) Calling .Close
I0317 12:52:50.094989  638169 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:52:50.095004  638169 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:52:50.095029  638169 main.go:141] libmachine: Making call to close driver server
I0317 12:52:50.095038  638169 main.go:141] libmachine: (functional-141794) Calling .Close
I0317 12:52:50.095347  638169 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:52:50.095367  638169 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:52:50.095392  638169 main.go:141] libmachine: (functional-141794) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-141794 image ls --format yaml --alsologtostderr:
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
- docker.io/library/nginx@sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc
repoTags:
- docker.io/library/nginx:alpine
size: "49323988"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-141794
size: "4943877"
- id: 35b9c0eb4ed485de42e12fdf53390917e3d45d17418d28f79b6bd0bd04fd63f3
repoDigests:
- localhost/minikube-local-cache-test@sha256:c5e6c837ba1be67874b3b9c94539cb4b18387d629a6bb388dd35dd50e0268035
repoTags:
- localhost/minikube-local-cache-test:functional-141794
size: "3328"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: b52e0b094bc0e26c9eddc9e4ab7a64ce0033c3360d8b7ad4ff4132c4e03e8f7b
repoDigests:
- docker.io/library/nginx@sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44
- docker.io/library/nginx@sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496
repoTags:
- docker.io/library/nginx:latest
size: "196159380"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-141794 image ls --format yaml --alsologtostderr:
I0317 12:52:49.614665  638077 out.go:345] Setting OutFile to fd 1 ...
I0317 12:52:49.614971  638077 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:52:49.614989  638077 out.go:358] Setting ErrFile to fd 2...
I0317 12:52:49.614996  638077 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:52:49.615320  638077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
I0317 12:52:49.615989  638077 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:52:49.616105  638077 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:52:49.616531  638077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:52:49.616606  638077 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:52:49.633582  638077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
I0317 12:52:49.634251  638077 main.go:141] libmachine: () Calling .GetVersion
I0317 12:52:49.634908  638077 main.go:141] libmachine: Using API Version  1
I0317 12:52:49.634937  638077 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:52:49.635338  638077 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:52:49.635598  638077 main.go:141] libmachine: (functional-141794) Calling .GetState
I0317 12:52:49.638159  638077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:52:49.638208  638077 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:52:49.656880  638077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36723
I0317 12:52:49.657430  638077 main.go:141] libmachine: () Calling .GetVersion
I0317 12:52:49.657936  638077 main.go:141] libmachine: Using API Version  1
I0317 12:52:49.657962  638077 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:52:49.658387  638077 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:52:49.658619  638077 main.go:141] libmachine: (functional-141794) Calling .DriverName
I0317 12:52:49.658821  638077 ssh_runner.go:195] Run: systemctl --version
I0317 12:52:49.658852  638077 main.go:141] libmachine: (functional-141794) Calling .GetSSHHostname
I0317 12:52:49.662204  638077 main.go:141] libmachine: (functional-141794) DBG | domain functional-141794 has defined MAC address 52:54:00:ed:ae:0a in network mk-functional-141794
I0317 12:52:49.662697  638077 main.go:141] libmachine: (functional-141794) Calling .GetSSHPort
I0317 12:52:49.662722  638077 main.go:141] libmachine: (functional-141794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ae:0a", ip: ""} in network mk-functional-141794: {Iface:virbr1 ExpiryTime:2025-03-17 13:49:45 +0000 UTC Type:0 Mac:52:54:00:ed:ae:0a Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-141794 Clientid:01:52:54:00:ed:ae:0a}
I0317 12:52:49.662752  638077 main.go:141] libmachine: (functional-141794) DBG | domain functional-141794 has defined IP address 192.168.39.221 and MAC address 52:54:00:ed:ae:0a in network mk-functional-141794
I0317 12:52:49.662857  638077 main.go:141] libmachine: (functional-141794) Calling .GetSSHKeyPath
I0317 12:52:49.663021  638077 main.go:141] libmachine: (functional-141794) Calling .GetSSHUsername
I0317 12:52:49.663155  638077 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/functional-141794/id_rsa Username:docker}
I0317 12:52:49.791161  638077 ssh_runner.go:195] Run: sudo crictl images --output json
I0317 12:52:49.844878  638077 main.go:141] libmachine: Making call to close driver server
I0317 12:52:49.844893  638077 main.go:141] libmachine: (functional-141794) Calling .Close
I0317 12:52:49.845237  638077 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:52:49.845260  638077 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:52:49.845270  638077 main.go:141] libmachine: Making call to close driver server
I0317 12:52:49.845273  638077 main.go:141] libmachine: (functional-141794) DBG | Closing plugin on server side
I0317 12:52:49.845279  638077 main.go:141] libmachine: (functional-141794) Calling .Close
I0317 12:52:49.845495  638077 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:52:49.845522  638077 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:52:49.845622  638077 main.go:141] libmachine: (functional-141794) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141794 ssh pgrep buildkitd: exit status 1 (230.89006ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image build -t localhost/my-image:functional-141794 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 image build -t localhost/my-image:functional-141794 testdata/build --alsologtostderr: (3.690164345s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-141794 image build -t localhost/my-image:functional-141794 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> dc0c05a5a7f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-141794
--> 5aea94adf35
Successfully tagged localhost/my-image:functional-141794
5aea94adf3550249604877280e86dc2af387acb7d5e6a6fd02cf26b0efaa61e4
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-141794 image build -t localhost/my-image:functional-141794 testdata/build --alsologtostderr:
I0317 12:52:49.933714  638187 out.go:345] Setting OutFile to fd 1 ...
I0317 12:52:49.933825  638187 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:52:49.933836  638187 out.go:358] Setting ErrFile to fd 2...
I0317 12:52:49.933843  638187 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:52:49.934100  638187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
I0317 12:52:49.934858  638187 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:52:49.935612  638187 config.go:182] Loaded profile config "functional-141794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:52:49.936003  638187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:52:49.936045  638187 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:52:49.952016  638187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
I0317 12:52:49.952446  638187 main.go:141] libmachine: () Calling .GetVersion
I0317 12:52:49.953232  638187 main.go:141] libmachine: Using API Version  1
I0317 12:52:49.953270  638187 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:52:49.953670  638187 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:52:49.953912  638187 main.go:141] libmachine: (functional-141794) Calling .GetState
I0317 12:52:49.956439  638187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:52:49.956480  638187 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:52:49.972394  638187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
I0317 12:52:49.972915  638187 main.go:141] libmachine: () Calling .GetVersion
I0317 12:52:49.973459  638187 main.go:141] libmachine: Using API Version  1
I0317 12:52:49.973481  638187 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:52:49.973814  638187 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:52:49.974029  638187 main.go:141] libmachine: (functional-141794) Calling .DriverName
I0317 12:52:49.974247  638187 ssh_runner.go:195] Run: systemctl --version
I0317 12:52:49.974279  638187 main.go:141] libmachine: (functional-141794) Calling .GetSSHHostname
I0317 12:52:49.977270  638187 main.go:141] libmachine: (functional-141794) DBG | domain functional-141794 has defined MAC address 52:54:00:ed:ae:0a in network mk-functional-141794
I0317 12:52:49.977672  638187 main.go:141] libmachine: (functional-141794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ae:0a", ip: ""} in network mk-functional-141794: {Iface:virbr1 ExpiryTime:2025-03-17 13:49:45 +0000 UTC Type:0 Mac:52:54:00:ed:ae:0a Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-141794 Clientid:01:52:54:00:ed:ae:0a}
I0317 12:52:49.977709  638187 main.go:141] libmachine: (functional-141794) DBG | domain functional-141794 has defined IP address 192.168.39.221 and MAC address 52:54:00:ed:ae:0a in network mk-functional-141794
I0317 12:52:49.977821  638187 main.go:141] libmachine: (functional-141794) Calling .GetSSHPort
I0317 12:52:49.978031  638187 main.go:141] libmachine: (functional-141794) Calling .GetSSHKeyPath
I0317 12:52:49.978153  638187 main.go:141] libmachine: (functional-141794) Calling .GetSSHUsername
I0317 12:52:49.978264  638187 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/functional-141794/id_rsa Username:docker}
I0317 12:52:50.071950  638187 build_images.go:161] Building image from path: /tmp/build.3095031057.tar
I0317 12:52:50.072079  638187 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0317 12:52:50.111223  638187 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3095031057.tar
I0317 12:52:50.116453  638187 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3095031057.tar: stat -c "%s %y" /var/lib/minikube/build/build.3095031057.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3095031057.tar': No such file or directory
I0317 12:52:50.116483  638187 ssh_runner.go:362] scp /tmp/build.3095031057.tar --> /var/lib/minikube/build/build.3095031057.tar (3072 bytes)
I0317 12:52:50.155654  638187 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3095031057
I0317 12:52:50.168627  638187 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3095031057 -xf /var/lib/minikube/build/build.3095031057.tar
I0317 12:52:50.180741  638187 crio.go:315] Building image: /var/lib/minikube/build/build.3095031057
I0317 12:52:50.180801  638187 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-141794 /var/lib/minikube/build/build.3095031057 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0317 12:52:53.541306  638187 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-141794 /var/lib/minikube/build/build.3095031057 --cgroup-manager=cgroupfs: (3.360459038s)
I0317 12:52:53.541391  638187 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3095031057
I0317 12:52:53.552180  638187 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3095031057.tar
I0317 12:52:53.561900  638187 build_images.go:217] Built localhost/my-image:functional-141794 from /tmp/build.3095031057.tar
I0317 12:52:53.561940  638187 build_images.go:133] succeeded building to: functional-141794
I0317 12:52:53.561945  638187 build_images.go:134] failed building to: 
I0317 12:52:53.561973  638187 main.go:141] libmachine: Making call to close driver server
I0317 12:52:53.561984  638187 main.go:141] libmachine: (functional-141794) Calling .Close
I0317 12:52:53.562365  638187 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:52:53.562389  638187 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:52:53.562395  638187 main.go:141] libmachine: (functional-141794) DBG | Closing plugin on server side
I0317 12:52:53.562403  638187 main.go:141] libmachine: Making call to close driver server
I0317 12:52:53.562433  638187 main.go:141] libmachine: (functional-141794) Calling .Close
I0317 12:52:53.562707  638187 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:52:53.562725  638187 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:52:53.562744  638187 main.go:141] libmachine: (functional-141794) DBG | Closing plugin on server side
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image ls
2025/03/17 12:53:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)
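For reference, the build exercised here can be reproduced by hand with the CI-built binary and the profile from this run. A minimal sketch, assuming a build context equivalent to the podman STEP output above (the real files under testdata/build in the minikube repo may differ):

    # Containerfile reconstructed from STEP 1/3..3/3 above (assumption):
    #   FROM gcr.io/k8s-minikube/busybox
    #   RUN true
    #   ADD content.txt /
    # content.txt can be any small text file placed next to the Containerfile.
    out/minikube-linux-amd64 -p functional-141794 image build -t localhost/my-image:functional-141794 testdata/build --alsologtostderr
    out/minikube-linux-amd64 -p functional-141794 image ls    # the new localhost/my-image tag should appear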

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.579182422s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-141794
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.60s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-141794 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-141794 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-141794 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-141794 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 635468: os: process already finished
helpers_test.go:502: unable to terminate pid 635488: os: process already finished
helpers_test.go:502: unable to terminate pid 635523: os: process already finished
helpers_test.go:508: unable to kill pid 635438: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-141794 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-141794 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0111de90-66a2-4a9e-99d9-d81806ab83eb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0111de90-66a2-4a9e-99d9-d81806ab83eb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.036026043s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image load --daemon kicbase/echo-server:functional-141794 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 image load --daemon kicbase/echo-server:functional-141794 --alsologtostderr: (1.134819485s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image load --daemon kicbase/echo-server:functional-141794 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 image load --daemon kicbase/echo-server:functional-141794 --alsologtostderr: (1.813404517s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.05s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "295.321433ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "56.664425ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "300.714451ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "49.952643ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-141794
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image load --daemon kicbase/echo-server:functional-141794 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image save kicbase/echo-server:functional-141794 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image rm kicbase/echo-server:functional-141794 --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 image rm kicbase/echo-server:functional-141794 --alsologtostderr: (1.433767585s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: (dbg) Done: out/minikube-linux-amd64 -p functional-141794 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.443079722s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.69s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-141794 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.137.84 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-141794 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-141794
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 image save --daemon kicbase/echo-server:functional-141794 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-141794
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (15.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-141794 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-141794 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-pzst4" [78e64e0a-cf48-46d5-a36f-84d683ad294d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-pzst4" [78e64e0a-cf48-46d5-a36f-84d683ad294d] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.00391049s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-141794 /tmp/TestFunctionalparallelMountCmdany-port3872408579/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1742215957469842796" to /tmp/TestFunctionalparallelMountCmdany-port3872408579/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1742215957469842796" to /tmp/TestFunctionalparallelMountCmdany-port3872408579/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1742215957469842796" to /tmp/TestFunctionalparallelMountCmdany-port3872408579/001/test-1742215957469842796
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141794 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (208.961517ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0317 12:52:37.679108  629188 retry.go:31] will retry after 279.998214ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 17 12:52 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 17 12:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 17 12:52 test-1742215957469842796
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh cat /mount-9p/test-1742215957469842796
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-141794 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1fa15437-5136-4aff-b8b7-2b476c0f854e] Pending
helpers_test.go:344: "busybox-mount" [1fa15437-5136-4aff-b8b7-2b476c0f854e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1fa15437-5136-4aff-b8b7-2b476c0f854e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1fa15437-5136-4aff-b8b7-2b476c0f854e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.003994127s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-141794 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141794 /tmp/TestFunctionalparallelMountCmdany-port3872408579/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.55s)
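The 9p mount behaviour verified above can be checked manually with the same commands the test drives. A minimal sketch, assuming the functional-141794 profile is running; the host directory /tmp/mount-demo is a placeholder, not the path used by the test:

    # Start the 9p mount in the background, then confirm it is visible in the guest
    out/minikube-linux-amd64 mount -p functional-141794 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-141794 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-141794 ssh -- ls -la /mount-9p
    # Clean up: force-unmount inside the guest, then stop the background mount process
    out/minikube-linux-amd64 -p functional-141794 ssh "sudo umount -f /mount-9p"
    kill %1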

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.83s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 service list -o json
functional_test.go:1511: Took "854.985727ms" to run "out/minikube-linux-amd64 -p functional-141794 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.86s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.221:31132
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-141794 /tmp/TestFunctionalparallelMountCmdspecific-port2591243995/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141794 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.118035ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0317 12:52:47.229281  629188 retry.go:31] will retry after 580.478833ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141794 /tmp/TestFunctionalparallelMountCmdspecific-port2591243995/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141794 ssh "sudo umount -f /mount-9p": exit status 1 (228.680965ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-141794 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141794 /tmp/TestFunctionalparallelMountCmdspecific-port2591243995/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.221:31132
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-141794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3037892542/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-141794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3037892542/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-141794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3037892542/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-141794 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-141794 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3037892542/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3037892542/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3037892542/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.95s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-141794
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-141794
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-141794
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (191s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-168768 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0317 12:53:44.452081  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:54:12.159466  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-168768 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m10.332255985s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (191.00s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-168768 -- rollout status deployment/busybox: (5.98938045s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-4mrml -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-7tbdc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-qg6qz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-4mrml -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-7tbdc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-qg6qz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-4mrml -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-7tbdc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-qg6qz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.05s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-4mrml -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-4mrml -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-7tbdc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-7tbdc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-qg6qz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-168768 -- exec busybox-58667487b6-qg6qz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-168768 -v=7 --alsologtostderr
E0317 12:57:12.573448  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:12.579854  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:12.591210  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:12.612617  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:12.654046  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:12.735514  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:12.897114  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:13.219317  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:13.860967  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:15.142830  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:17.704873  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-168768 -v=7 --alsologtostderr: (56.084059667s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.90s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-168768 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)
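Note: the label check above can be reproduced by hand with the same jsonpath query; a minimal sketch, assuming the ha-168768 context from this run and (for the second form only) that jq is available as a readability convenience:

	kubectl --context ha-168768 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
	# roughly equivalent, printing one labels object per node:
	kubectl --context ha-168768 get nodes -o json | jq '.items[].metadata.labels'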

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp testdata/cp-test.txt ha-168768:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768 "sudo cat /home/docker/cp-test.txt"
E0317 12:57:22.826652  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile235303870/001/cp-test_ha-168768.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768:/home/docker/cp-test.txt ha-168768-m02:/home/docker/cp-test_ha-168768_ha-168768-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m02 "sudo cat /home/docker/cp-test_ha-168768_ha-168768-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768:/home/docker/cp-test.txt ha-168768-m03:/home/docker/cp-test_ha-168768_ha-168768-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m03 "sudo cat /home/docker/cp-test_ha-168768_ha-168768-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768:/home/docker/cp-test.txt ha-168768-m04:/home/docker/cp-test_ha-168768_ha-168768-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m04 "sudo cat /home/docker/cp-test_ha-168768_ha-168768-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp testdata/cp-test.txt ha-168768-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile235303870/001/cp-test_ha-168768-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m02:/home/docker/cp-test.txt ha-168768:/home/docker/cp-test_ha-168768-m02_ha-168768.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768 "sudo cat /home/docker/cp-test_ha-168768-m02_ha-168768.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m02:/home/docker/cp-test.txt ha-168768-m03:/home/docker/cp-test_ha-168768-m02_ha-168768-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m03 "sudo cat /home/docker/cp-test_ha-168768-m02_ha-168768-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m02:/home/docker/cp-test.txt ha-168768-m04:/home/docker/cp-test_ha-168768-m02_ha-168768-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m04 "sudo cat /home/docker/cp-test_ha-168768-m02_ha-168768-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp testdata/cp-test.txt ha-168768-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile235303870/001/cp-test_ha-168768-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m03:/home/docker/cp-test.txt ha-168768:/home/docker/cp-test_ha-168768-m03_ha-168768.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768 "sudo cat /home/docker/cp-test_ha-168768-m03_ha-168768.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m03:/home/docker/cp-test.txt ha-168768-m02:/home/docker/cp-test_ha-168768-m03_ha-168768-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m02 "sudo cat /home/docker/cp-test_ha-168768-m03_ha-168768-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m03:/home/docker/cp-test.txt ha-168768-m04:/home/docker/cp-test_ha-168768-m03_ha-168768-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m04 "sudo cat /home/docker/cp-test_ha-168768-m03_ha-168768-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp testdata/cp-test.txt ha-168768-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile235303870/001/cp-test_ha-168768-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m04:/home/docker/cp-test.txt ha-168768:/home/docker/cp-test_ha-168768-m04_ha-168768.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768 "sudo cat /home/docker/cp-test_ha-168768-m04_ha-168768.txt"
E0317 12:57:33.068195  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m04:/home/docker/cp-test.txt ha-168768-m02:/home/docker/cp-test_ha-168768-m04_ha-168768-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m02 "sudo cat /home/docker/cp-test_ha-168768-m04_ha-168768-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 cp ha-168768-m04:/home/docker/cp-test.txt ha-168768-m03:/home/docker/cp-test_ha-168768-m04_ha-168768-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 ssh -n ha-168768-m03 "sudo cat /home/docker/cp-test_ha-168768-m04_ha-168768-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.04s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 node stop m02 -v=7 --alsologtostderr
E0317 12:57:53.550499  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:58:34.512311  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:58:44.452565  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-168768 node stop m02 -v=7 --alsologtostderr: (1m30.984070721s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-168768 status -v=7 --alsologtostderr: exit status 7 (621.620538ms)

                                                
                                                
-- stdout --
	ha-168768
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-168768-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-168768-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-168768-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 12:59:05.666087  642871 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:59:05.666197  642871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:59:05.666208  642871 out.go:358] Setting ErrFile to fd 2...
	I0317 12:59:05.666213  642871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:59:05.666423  642871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 12:59:05.666636  642871 out.go:352] Setting JSON to false
	I0317 12:59:05.666675  642871 mustload.go:65] Loading cluster: ha-168768
	I0317 12:59:05.666724  642871 notify.go:220] Checking for updates...
	I0317 12:59:05.667137  642871 config.go:182] Loaded profile config "ha-168768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 12:59:05.667161  642871 status.go:174] checking status of ha-168768 ...
	I0317 12:59:05.667639  642871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:59:05.667682  642871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:59:05.684193  642871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I0317 12:59:05.684695  642871 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:59:05.685301  642871 main.go:141] libmachine: Using API Version  1
	I0317 12:59:05.685327  642871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:59:05.685798  642871 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:59:05.686009  642871 main.go:141] libmachine: (ha-168768) Calling .GetState
	I0317 12:59:05.687594  642871 status.go:371] ha-168768 host status = "Running" (err=<nil>)
	I0317 12:59:05.687616  642871 host.go:66] Checking if "ha-168768" exists ...
	I0317 12:59:05.687937  642871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:59:05.687973  642871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:59:05.703230  642871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36651
	I0317 12:59:05.703748  642871 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:59:05.704186  642871 main.go:141] libmachine: Using API Version  1
	I0317 12:59:05.704216  642871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:59:05.704621  642871 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:59:05.704798  642871 main.go:141] libmachine: (ha-168768) Calling .GetIP
	I0317 12:59:05.707800  642871 main.go:141] libmachine: (ha-168768) DBG | domain ha-168768 has defined MAC address 52:54:00:81:74:93 in network mk-ha-168768
	I0317 12:59:05.708304  642871 main.go:141] libmachine: (ha-168768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:74:93", ip: ""} in network mk-ha-168768: {Iface:virbr1 ExpiryTime:2025-03-17 13:53:17 +0000 UTC Type:0 Mac:52:54:00:81:74:93 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-168768 Clientid:01:52:54:00:81:74:93}
	I0317 12:59:05.708333  642871 main.go:141] libmachine: (ha-168768) DBG | domain ha-168768 has defined IP address 192.168.39.60 and MAC address 52:54:00:81:74:93 in network mk-ha-168768
	I0317 12:59:05.708451  642871 host.go:66] Checking if "ha-168768" exists ...
	I0317 12:59:05.708806  642871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:59:05.708850  642871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:59:05.724859  642871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44907
	I0317 12:59:05.725648  642871 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:59:05.726120  642871 main.go:141] libmachine: Using API Version  1
	I0317 12:59:05.726146  642871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:59:05.726576  642871 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:59:05.726793  642871 main.go:141] libmachine: (ha-168768) Calling .DriverName
	I0317 12:59:05.726960  642871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 12:59:05.727005  642871 main.go:141] libmachine: (ha-168768) Calling .GetSSHHostname
	I0317 12:59:05.729772  642871 main.go:141] libmachine: (ha-168768) DBG | domain ha-168768 has defined MAC address 52:54:00:81:74:93 in network mk-ha-168768
	I0317 12:59:05.730185  642871 main.go:141] libmachine: (ha-168768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:74:93", ip: ""} in network mk-ha-168768: {Iface:virbr1 ExpiryTime:2025-03-17 13:53:17 +0000 UTC Type:0 Mac:52:54:00:81:74:93 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:ha-168768 Clientid:01:52:54:00:81:74:93}
	I0317 12:59:05.730220  642871 main.go:141] libmachine: (ha-168768) DBG | domain ha-168768 has defined IP address 192.168.39.60 and MAC address 52:54:00:81:74:93 in network mk-ha-168768
	I0317 12:59:05.730345  642871 main.go:141] libmachine: (ha-168768) Calling .GetSSHPort
	I0317 12:59:05.730515  642871 main.go:141] libmachine: (ha-168768) Calling .GetSSHKeyPath
	I0317 12:59:05.730658  642871 main.go:141] libmachine: (ha-168768) Calling .GetSSHUsername
	I0317 12:59:05.730799  642871 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/ha-168768/id_rsa Username:docker}
	I0317 12:59:05.818238  642871 ssh_runner.go:195] Run: systemctl --version
	I0317 12:59:05.824458  642871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 12:59:05.837735  642871 kubeconfig.go:125] found "ha-168768" server: "https://192.168.39.254:8443"
	I0317 12:59:05.837770  642871 api_server.go:166] Checking apiserver status ...
	I0317 12:59:05.837807  642871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 12:59:05.850267  642871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1156/cgroup
	W0317 12:59:05.860483  642871 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1156/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0317 12:59:05.860524  642871 ssh_runner.go:195] Run: ls
	I0317 12:59:05.865303  642871 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0317 12:59:05.869178  642871 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0317 12:59:05.869200  642871 status.go:463] ha-168768 apiserver status = Running (err=<nil>)
	I0317 12:59:05.869212  642871 status.go:176] ha-168768 status: &{Name:ha-168768 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 12:59:05.869235  642871 status.go:174] checking status of ha-168768-m02 ...
	I0317 12:59:05.869558  642871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:59:05.869585  642871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:59:05.886164  642871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42061
	I0317 12:59:05.886593  642871 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:59:05.887030  642871 main.go:141] libmachine: Using API Version  1
	I0317 12:59:05.887053  642871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:59:05.887418  642871 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:59:05.887622  642871 main.go:141] libmachine: (ha-168768-m02) Calling .GetState
	I0317 12:59:05.889337  642871 status.go:371] ha-168768-m02 host status = "Stopped" (err=<nil>)
	I0317 12:59:05.889352  642871 status.go:384] host is not running, skipping remaining checks
	I0317 12:59:05.889357  642871 status.go:176] ha-168768-m02 status: &{Name:ha-168768-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 12:59:05.889374  642871 status.go:174] checking status of ha-168768-m03 ...
	I0317 12:59:05.889652  642871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:59:05.889698  642871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:59:05.905037  642871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36857
	I0317 12:59:05.905485  642871 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:59:05.905929  642871 main.go:141] libmachine: Using API Version  1
	I0317 12:59:05.905949  642871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:59:05.906273  642871 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:59:05.906463  642871 main.go:141] libmachine: (ha-168768-m03) Calling .GetState
	I0317 12:59:05.907910  642871 status.go:371] ha-168768-m03 host status = "Running" (err=<nil>)
	I0317 12:59:05.907925  642871 host.go:66] Checking if "ha-168768-m03" exists ...
	I0317 12:59:05.908187  642871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:59:05.908221  642871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:59:05.923415  642871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0317 12:59:05.923840  642871 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:59:05.924295  642871 main.go:141] libmachine: Using API Version  1
	I0317 12:59:05.924319  642871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:59:05.924668  642871 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:59:05.924836  642871 main.go:141] libmachine: (ha-168768-m03) Calling .GetIP
	I0317 12:59:05.927986  642871 main.go:141] libmachine: (ha-168768-m03) DBG | domain ha-168768-m03 has defined MAC address 52:54:00:4e:9e:8d in network mk-ha-168768
	I0317 12:59:05.928448  642871 main.go:141] libmachine: (ha-168768-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:9e:8d", ip: ""} in network mk-ha-168768: {Iface:virbr1 ExpiryTime:2025-03-17 13:55:13 +0000 UTC Type:0 Mac:52:54:00:4e:9e:8d Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-168768-m03 Clientid:01:52:54:00:4e:9e:8d}
	I0317 12:59:05.928475  642871 main.go:141] libmachine: (ha-168768-m03) DBG | domain ha-168768-m03 has defined IP address 192.168.39.33 and MAC address 52:54:00:4e:9e:8d in network mk-ha-168768
	I0317 12:59:05.928618  642871 host.go:66] Checking if "ha-168768-m03" exists ...
	I0317 12:59:05.928915  642871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:59:05.928960  642871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:59:05.944531  642871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44755
	I0317 12:59:05.944955  642871 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:59:05.945364  642871 main.go:141] libmachine: Using API Version  1
	I0317 12:59:05.945391  642871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:59:05.945732  642871 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:59:05.945925  642871 main.go:141] libmachine: (ha-168768-m03) Calling .DriverName
	I0317 12:59:05.946120  642871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 12:59:05.946143  642871 main.go:141] libmachine: (ha-168768-m03) Calling .GetSSHHostname
	I0317 12:59:05.948967  642871 main.go:141] libmachine: (ha-168768-m03) DBG | domain ha-168768-m03 has defined MAC address 52:54:00:4e:9e:8d in network mk-ha-168768
	I0317 12:59:05.949404  642871 main.go:141] libmachine: (ha-168768-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:9e:8d", ip: ""} in network mk-ha-168768: {Iface:virbr1 ExpiryTime:2025-03-17 13:55:13 +0000 UTC Type:0 Mac:52:54:00:4e:9e:8d Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ha-168768-m03 Clientid:01:52:54:00:4e:9e:8d}
	I0317 12:59:05.949438  642871 main.go:141] libmachine: (ha-168768-m03) DBG | domain ha-168768-m03 has defined IP address 192.168.39.33 and MAC address 52:54:00:4e:9e:8d in network mk-ha-168768
	I0317 12:59:05.949619  642871 main.go:141] libmachine: (ha-168768-m03) Calling .GetSSHPort
	I0317 12:59:05.949815  642871 main.go:141] libmachine: (ha-168768-m03) Calling .GetSSHKeyPath
	I0317 12:59:05.949961  642871 main.go:141] libmachine: (ha-168768-m03) Calling .GetSSHUsername
	I0317 12:59:05.950102  642871 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/ha-168768-m03/id_rsa Username:docker}
	I0317 12:59:06.034573  642871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 12:59:06.050234  642871 kubeconfig.go:125] found "ha-168768" server: "https://192.168.39.254:8443"
	I0317 12:59:06.050265  642871 api_server.go:166] Checking apiserver status ...
	I0317 12:59:06.050306  642871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 12:59:06.063373  642871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W0317 12:59:06.071977  642871 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0317 12:59:06.072025  642871 ssh_runner.go:195] Run: ls
	I0317 12:59:06.075995  642871 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0317 12:59:06.080948  642871 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0317 12:59:06.080969  642871 status.go:463] ha-168768-m03 apiserver status = Running (err=<nil>)
	I0317 12:59:06.080978  642871 status.go:176] ha-168768-m03 status: &{Name:ha-168768-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 12:59:06.080996  642871 status.go:174] checking status of ha-168768-m04 ...
	I0317 12:59:06.081297  642871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:59:06.081342  642871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:59:06.096654  642871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40699
	I0317 12:59:06.097112  642871 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:59:06.097524  642871 main.go:141] libmachine: Using API Version  1
	I0317 12:59:06.097547  642871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:59:06.097878  642871 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:59:06.098078  642871 main.go:141] libmachine: (ha-168768-m04) Calling .GetState
	I0317 12:59:06.099463  642871 status.go:371] ha-168768-m04 host status = "Running" (err=<nil>)
	I0317 12:59:06.099481  642871 host.go:66] Checking if "ha-168768-m04" exists ...
	I0317 12:59:06.099800  642871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:59:06.099844  642871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:59:06.114806  642871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0317 12:59:06.115208  642871 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:59:06.115598  642871 main.go:141] libmachine: Using API Version  1
	I0317 12:59:06.115618  642871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:59:06.115994  642871 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:59:06.116151  642871 main.go:141] libmachine: (ha-168768-m04) Calling .GetIP
	I0317 12:59:06.118912  642871 main.go:141] libmachine: (ha-168768-m04) DBG | domain ha-168768-m04 has defined MAC address 52:54:00:76:64:84 in network mk-ha-168768
	I0317 12:59:06.119295  642871 main.go:141] libmachine: (ha-168768-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:64:84", ip: ""} in network mk-ha-168768: {Iface:virbr1 ExpiryTime:2025-03-17 13:56:38 +0000 UTC Type:0 Mac:52:54:00:76:64:84 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-168768-m04 Clientid:01:52:54:00:76:64:84}
	I0317 12:59:06.119316  642871 main.go:141] libmachine: (ha-168768-m04) DBG | domain ha-168768-m04 has defined IP address 192.168.39.230 and MAC address 52:54:00:76:64:84 in network mk-ha-168768
	I0317 12:59:06.119465  642871 host.go:66] Checking if "ha-168768-m04" exists ...
	I0317 12:59:06.119814  642871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 12:59:06.119861  642871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 12:59:06.134754  642871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I0317 12:59:06.135125  642871 main.go:141] libmachine: () Calling .GetVersion
	I0317 12:59:06.135598  642871 main.go:141] libmachine: Using API Version  1
	I0317 12:59:06.135622  642871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 12:59:06.135954  642871 main.go:141] libmachine: () Calling .GetMachineName
	I0317 12:59:06.136149  642871 main.go:141] libmachine: (ha-168768-m04) Calling .DriverName
	I0317 12:59:06.136355  642871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 12:59:06.136377  642871 main.go:141] libmachine: (ha-168768-m04) Calling .GetSSHHostname
	I0317 12:59:06.139040  642871 main.go:141] libmachine: (ha-168768-m04) DBG | domain ha-168768-m04 has defined MAC address 52:54:00:76:64:84 in network mk-ha-168768
	I0317 12:59:06.139427  642871 main.go:141] libmachine: (ha-168768-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:64:84", ip: ""} in network mk-ha-168768: {Iface:virbr1 ExpiryTime:2025-03-17 13:56:38 +0000 UTC Type:0 Mac:52:54:00:76:64:84 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-168768-m04 Clientid:01:52:54:00:76:64:84}
	I0317 12:59:06.139464  642871 main.go:141] libmachine: (ha-168768-m04) DBG | domain ha-168768-m04 has defined IP address 192.168.39.230 and MAC address 52:54:00:76:64:84 in network mk-ha-168768
	I0317 12:59:06.139586  642871 main.go:141] libmachine: (ha-168768-m04) Calling .GetSSHPort
	I0317 12:59:06.139741  642871 main.go:141] libmachine: (ha-168768-m04) Calling .GetSSHKeyPath
	I0317 12:59:06.139891  642871 main.go:141] libmachine: (ha-168768-m04) Calling .GetSSHUsername
	I0317 12:59:06.139996  642871 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/ha-168768-m04/id_rsa Username:docker}
	I0317 12:59:06.222652  642871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 12:59:06.238191  642871 status.go:176] ha-168768-m04 status: &{Name:ha-168768-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.61s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (44.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-168768 node start m02 -v=7 --alsologtostderr: (43.807624459s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.71s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (428.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-168768 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-168768 -v=7 --alsologtostderr
E0317 12:59:56.434354  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:02:12.573777  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:02:40.276537  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:03:44.452389  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-168768 -v=7 --alsologtostderr: (4m34.146225888s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-168768 --wait=true -v=7 --alsologtostderr
E0317 13:05:07.521458  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-168768 --wait=true -v=7 --alsologtostderr: (2m34.458795139s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-168768
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (428.71s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 node delete m03 -v=7 --alsologtostderr
E0317 13:07:12.579949  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-168768 node delete m03 -v=7 --alsologtostderr: (17.272396083s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.99s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.60s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 stop -v=7 --alsologtostderr
E0317 13:08:44.452836  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-168768 stop -v=7 --alsologtostderr: (4m32.796861221s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-168768 status -v=7 --alsologtostderr: exit status 7 (115.229565ms)

                                                
                                                
-- stdout --
	ha-168768
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-168768-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-168768-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:11:52.599928  647026 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:11:52.600056  647026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:11:52.600069  647026 out.go:358] Setting ErrFile to fd 2...
	I0317 13:11:52.600077  647026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:11:52.600301  647026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:11:52.600459  647026 out.go:352] Setting JSON to false
	I0317 13:11:52.600491  647026 mustload.go:65] Loading cluster: ha-168768
	I0317 13:11:52.600582  647026 notify.go:220] Checking for updates...
	I0317 13:11:52.600840  647026 config.go:182] Loaded profile config "ha-168768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:11:52.600862  647026 status.go:174] checking status of ha-168768 ...
	I0317 13:11:52.601325  647026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:11:52.601372  647026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:11:52.625930  647026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I0317 13:11:52.626375  647026 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:11:52.626884  647026 main.go:141] libmachine: Using API Version  1
	I0317 13:11:52.626911  647026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:11:52.627335  647026 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:11:52.627563  647026 main.go:141] libmachine: (ha-168768) Calling .GetState
	I0317 13:11:52.629025  647026 status.go:371] ha-168768 host status = "Stopped" (err=<nil>)
	I0317 13:11:52.629046  647026 status.go:384] host is not running, skipping remaining checks
	I0317 13:11:52.629053  647026 status.go:176] ha-168768 status: &{Name:ha-168768 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:11:52.629079  647026 status.go:174] checking status of ha-168768-m02 ...
	I0317 13:11:52.629375  647026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:11:52.629418  647026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:11:52.644677  647026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I0317 13:11:52.645186  647026 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:11:52.645741  647026 main.go:141] libmachine: Using API Version  1
	I0317 13:11:52.645779  647026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:11:52.646186  647026 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:11:52.646401  647026 main.go:141] libmachine: (ha-168768-m02) Calling .GetState
	I0317 13:11:52.647968  647026 status.go:371] ha-168768-m02 host status = "Stopped" (err=<nil>)
	I0317 13:11:52.647985  647026 status.go:384] host is not running, skipping remaining checks
	I0317 13:11:52.647991  647026 status.go:176] ha-168768-m02 status: &{Name:ha-168768-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:11:52.648016  647026 status.go:174] checking status of ha-168768-m04 ...
	I0317 13:11:52.648350  647026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:11:52.648398  647026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:11:52.663774  647026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45945
	I0317 13:11:52.664172  647026 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:11:52.664611  647026 main.go:141] libmachine: Using API Version  1
	I0317 13:11:52.664635  647026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:11:52.664963  647026 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:11:52.665154  647026 main.go:141] libmachine: (ha-168768-m04) Calling .GetState
	I0317 13:11:52.666738  647026 status.go:371] ha-168768-m04 host status = "Stopped" (err=<nil>)
	I0317 13:11:52.666750  647026 status.go:384] host is not running, skipping remaining checks
	I0317 13:11:52.666770  647026 status.go:176] ha-168768-m04 status: &{Name:ha-168768-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (120.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-168768 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0317 13:12:12.573432  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:13:35.637980  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:13:44.452602  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-168768 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m0.26015625s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (120.99s)
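Note: the readiness check above walks each node's status.conditions with a go-template and prints the status of the "Ready" condition. A minimal sketch of the same check, restated with shell-safe quoting and assuming the kubeconfig already points at the restarted cluster (the jsonpath form is an alternative restatement, not what the test runs):

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# alternative jsonpath form, one True/False per node:
	kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'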

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (74.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-168768 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-168768 --control-plane -v=7 --alsologtostderr: (1m13.784279139s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-168768 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.63s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (55.82s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-990861 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-990861 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.820278176s)
--- PASS: TestJSONOutput/start/Command (55.82s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-990861 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-990861 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-990861 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-990861 --output=json --user=testUser: (7.352076219s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-551839 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-551839 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.574585ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ad43dcb1-6518-4bc7-bfba-77432fd25dcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-551839] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"89f616c3-7925-4f3e-a64a-3990c22ee775","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20539"}}
	{"specversion":"1.0","id":"913c3dcc-0c7c-4fe6-bfc2-953ce018b792","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bb7126a7-4436-425a-9a50-7b51e8e2b772","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig"}}
	{"specversion":"1.0","id":"bc1cb90d-f1d3-4f82-a456-43bc68d0e9c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube"}}
	{"specversion":"1.0","id":"e82a48ff-9108-4574-afd6-eeb1884f8124","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"736c2cfd-ecd7-43a8-b137-6354050e4ff7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"961b5ed4-2e0e-47ce-938a-904090135339","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-551839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-551839
--- PASS: TestErrorJSONOutput (0.20s)
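Each line that `--output=json` emits is a CloudEvents-style JSON object like the ones captured in the stdout block above (`io.k8s.sigs.minikube.step`, `.info`, `.error`). The minimal Go sketch below only assumes the shape visible in this log; the `event` struct and the profile name in the usage comment are illustrative, not minikube's own types or names.

// Not part of the test suite: a minimal reader for the JSON event stream,
// e.g. `out/minikube-linux-amd64 start -p demo --output=json | go run main.go`.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the log above; it is an
// illustrative shape, not one of minikube's exported types.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
}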

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (84.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-431091 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-431091 --driver=kvm2  --container-runtime=crio: (41.821271215s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-444751 --driver=kvm2  --container-runtime=crio
E0317 13:17:12.572923  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-444751 --driver=kvm2  --container-runtime=crio: (40.062272747s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-431091
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-444751
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-444751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-444751
helpers_test.go:175: Cleaning up "first-431091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-431091
--- PASS: TestMinikubeProfile (84.90s)
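The profile steps above rely on `minikube profile list -ojson`. The sketch below shells out to that command and prints the profile names it finds; it decodes into a generic map so the only schema assumption is the per-profile `Name` field, and the binary path is taken from this run.

// Illustrative only: enumerate profiles through the same JSON output the test consumes.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatalf("profile list: %v", err)
	}
	// Decode loosely: top-level keys map to lists of profile objects.
	var parsed map[string]json.RawMessage
	if err := json.Unmarshal(out, &parsed); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for key, raw := range parsed {
		var profiles []struct {
			Name string `json:"Name"`
		}
		if err := json.Unmarshal(raw, &profiles); err != nil {
			continue // skip keys that are not profile lists
		}
		for _, p := range profiles {
			fmt.Printf("%s: %s\n", key, p.Name)
		}
	}
}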

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.91s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-392359 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-392359 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.907322362s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.91s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-392359 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-392359 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (26.91s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-411908 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-411908 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.905750329s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.91s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-411908 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-411908 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.88s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-392359 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-411908 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-411908 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-411908
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-411908: (1.278671132s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (22.3s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-411908
E0317 13:18:44.453168  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-411908: (21.304287564s)
--- PASS: TestMountStart/serial/RestartStopped (22.30s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-411908 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-411908 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)
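The VerifyMount* steps above boil down to two guest-side checks: the shared directory is listable at /minikube-host, and the share appears as a 9p filesystem in the guest's mount table. A rough Go equivalent is sketched below; the profile name is taken from this log and the binary path is an assumption.

// Illustrative sketch of the mount verification done over `minikube ssh`.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const bin = "out/minikube-linux-amd64"
	const profile = "mount-start-2-411908" // profile from the log above; substitute your own

	// List the default mount point inside the guest.
	ls, err := exec.Command(bin, "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		log.Fatalf("ls /minikube-host failed: %v\n%s", err, ls)
	}
	fmt.Printf("host dir contents:\n%s", ls)

	// Look for a 9p entry in the guest's mount table (the tests pipe `mount` to `grep 9p`).
	mounts, err := exec.Command(bin, "-p", profile, "ssh", "--", "mount").Output()
	if err != nil {
		log.Fatalf("mount failed: %v", err)
	}
	if strings.Contains(string(mounts), "9p") {
		fmt.Println("9p mount present")
	} else {
		fmt.Println("no 9p mount found")
	}
}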

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (108.31s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-913463 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-913463 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m47.912699239s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.31s)

TestMultiNode/serial/DeployApp2Nodes (5.98s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-913463 -- rollout status deployment/busybox: (4.519894408s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- exec busybox-58667487b6-bhbgj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- exec busybox-58667487b6-qbdfh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- exec busybox-58667487b6-bhbgj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- exec busybox-58667487b6-qbdfh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- exec busybox-58667487b6-bhbgj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- exec busybox-58667487b6-qbdfh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.98s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- exec busybox-58667487b6-bhbgj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- exec busybox-58667487b6-bhbgj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- exec busybox-58667487b6-qbdfh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-913463 -- exec busybox-58667487b6-qbdfh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

TestMultiNode/serial/AddNode (50.09s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-913463 -v 3 --alsologtostderr
E0317 13:21:47.523824  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-913463 -v 3 --alsologtostderr: (49.53655469s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.09s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-913463 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.57s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

TestMultiNode/serial/CopyFile (7.21s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp testdata/cp-test.txt multinode-913463:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp multinode-913463:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3185072967/001/cp-test_multinode-913463.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp multinode-913463:/home/docker/cp-test.txt multinode-913463-m02:/home/docker/cp-test_multinode-913463_multinode-913463-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m02 "sudo cat /home/docker/cp-test_multinode-913463_multinode-913463-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp multinode-913463:/home/docker/cp-test.txt multinode-913463-m03:/home/docker/cp-test_multinode-913463_multinode-913463-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m03 "sudo cat /home/docker/cp-test_multinode-913463_multinode-913463-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp testdata/cp-test.txt multinode-913463-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp multinode-913463-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3185072967/001/cp-test_multinode-913463-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp multinode-913463-m02:/home/docker/cp-test.txt multinode-913463:/home/docker/cp-test_multinode-913463-m02_multinode-913463.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463 "sudo cat /home/docker/cp-test_multinode-913463-m02_multinode-913463.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp multinode-913463-m02:/home/docker/cp-test.txt multinode-913463-m03:/home/docker/cp-test_multinode-913463-m02_multinode-913463-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m03 "sudo cat /home/docker/cp-test_multinode-913463-m02_multinode-913463-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp testdata/cp-test.txt multinode-913463-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp multinode-913463-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3185072967/001/cp-test_multinode-913463-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp multinode-913463-m03:/home/docker/cp-test.txt multinode-913463:/home/docker/cp-test_multinode-913463-m03_multinode-913463.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463 "sudo cat /home/docker/cp-test_multinode-913463-m03_multinode-913463.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 cp multinode-913463-m03:/home/docker/cp-test.txt multinode-913463-m02:/home/docker/cp-test_multinode-913463-m03_multinode-913463-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 ssh -n multinode-913463-m02 "sudo cat /home/docker/cp-test_multinode-913463-m03_multinode-913463-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.21s)
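The CopyFile sequence above is a copy/verify round trip: `minikube cp` pushes a file to a specific node (addressed as `<node>:<path>`), and `minikube ssh -n <node>` reads it back. A condensed Go sketch of one iteration follows; the profile and node names are taken from this log, the local file path and binary path are assumptions.

// Illustrative only: copy a file to a named node and read it back.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	const profile = "multinode-913463" // from the log above
	const node = "multinode-913463-m02"

	// Copy a local file onto the target node's filesystem.
	run("-p", profile, "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")

	// Read it back on that node to confirm the copy landed.
	content := run("-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
	fmt.Print(content)
}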

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-913463 node stop m03: (1.377118955s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-913463 status: exit status 7 (430.78697ms)

                                                
                                                
-- stdout --
	multinode-913463
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-913463-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-913463-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-913463 status --alsologtostderr: exit status 7 (432.512083ms)

                                                
                                                
-- stdout --
	multinode-913463
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-913463-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-913463-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:21:58.691670  655237 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:21:58.691795  655237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:21:58.691803  655237 out.go:358] Setting ErrFile to fd 2...
	I0317 13:21:58.691806  655237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:21:58.692026  655237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:21:58.692190  655237 out.go:352] Setting JSON to false
	I0317 13:21:58.692225  655237 mustload.go:65] Loading cluster: multinode-913463
	I0317 13:21:58.692359  655237 notify.go:220] Checking for updates...
	I0317 13:21:58.692653  655237 config.go:182] Loaded profile config "multinode-913463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:21:58.692678  655237 status.go:174] checking status of multinode-913463 ...
	I0317 13:21:58.693208  655237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:21:58.693251  655237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:21:58.709516  655237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44891
	I0317 13:21:58.710086  655237 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:21:58.710752  655237 main.go:141] libmachine: Using API Version  1
	I0317 13:21:58.710780  655237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:21:58.711111  655237 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:21:58.711314  655237 main.go:141] libmachine: (multinode-913463) Calling .GetState
	I0317 13:21:58.712886  655237 status.go:371] multinode-913463 host status = "Running" (err=<nil>)
	I0317 13:21:58.712910  655237 host.go:66] Checking if "multinode-913463" exists ...
	I0317 13:21:58.713296  655237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:21:58.713350  655237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:21:58.729316  655237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41945
	I0317 13:21:58.729868  655237 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:21:58.730355  655237 main.go:141] libmachine: Using API Version  1
	I0317 13:21:58.730378  655237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:21:58.730772  655237 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:21:58.730992  655237 main.go:141] libmachine: (multinode-913463) Calling .GetIP
	I0317 13:21:58.734481  655237 main.go:141] libmachine: (multinode-913463) DBG | domain multinode-913463 has defined MAC address 52:54:00:a4:d2:9a in network mk-multinode-913463
	I0317 13:21:58.735038  655237 main.go:141] libmachine: (multinode-913463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:d2:9a", ip: ""} in network mk-multinode-913463: {Iface:virbr1 ExpiryTime:2025-03-17 14:19:17 +0000 UTC Type:0 Mac:52:54:00:a4:d2:9a Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-913463 Clientid:01:52:54:00:a4:d2:9a}
	I0317 13:21:58.735075  655237 main.go:141] libmachine: (multinode-913463) DBG | domain multinode-913463 has defined IP address 192.168.39.248 and MAC address 52:54:00:a4:d2:9a in network mk-multinode-913463
	I0317 13:21:58.735254  655237 host.go:66] Checking if "multinode-913463" exists ...
	I0317 13:21:58.735634  655237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:21:58.735689  655237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:21:58.753214  655237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I0317 13:21:58.753662  655237 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:21:58.754123  655237 main.go:141] libmachine: Using API Version  1
	I0317 13:21:58.754147  655237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:21:58.754547  655237 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:21:58.754749  655237 main.go:141] libmachine: (multinode-913463) Calling .DriverName
	I0317 13:21:58.754990  655237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:21:58.755019  655237 main.go:141] libmachine: (multinode-913463) Calling .GetSSHHostname
	I0317 13:21:58.757900  655237 main.go:141] libmachine: (multinode-913463) DBG | domain multinode-913463 has defined MAC address 52:54:00:a4:d2:9a in network mk-multinode-913463
	I0317 13:21:58.758412  655237 main.go:141] libmachine: (multinode-913463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:d2:9a", ip: ""} in network mk-multinode-913463: {Iface:virbr1 ExpiryTime:2025-03-17 14:19:17 +0000 UTC Type:0 Mac:52:54:00:a4:d2:9a Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-913463 Clientid:01:52:54:00:a4:d2:9a}
	I0317 13:21:58.758459  655237 main.go:141] libmachine: (multinode-913463) DBG | domain multinode-913463 has defined IP address 192.168.39.248 and MAC address 52:54:00:a4:d2:9a in network mk-multinode-913463
	I0317 13:21:58.758592  655237 main.go:141] libmachine: (multinode-913463) Calling .GetSSHPort
	I0317 13:21:58.758757  655237 main.go:141] libmachine: (multinode-913463) Calling .GetSSHKeyPath
	I0317 13:21:58.758937  655237 main.go:141] libmachine: (multinode-913463) Calling .GetSSHUsername
	I0317 13:21:58.759078  655237 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/multinode-913463/id_rsa Username:docker}
	I0317 13:21:58.839218  655237 ssh_runner.go:195] Run: systemctl --version
	I0317 13:21:58.845875  655237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:21:58.860975  655237 kubeconfig.go:125] found "multinode-913463" server: "https://192.168.39.248:8443"
	I0317 13:21:58.861031  655237 api_server.go:166] Checking apiserver status ...
	I0317 13:21:58.861075  655237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:21:58.874493  655237 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1115/cgroup
	W0317 13:21:58.883833  655237 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1115/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:21:58.883906  655237 ssh_runner.go:195] Run: ls
	I0317 13:21:58.888143  655237 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I0317 13:21:58.893510  655237 api_server.go:279] https://192.168.39.248:8443/healthz returned 200:
	ok
	I0317 13:21:58.893552  655237 status.go:463] multinode-913463 apiserver status = Running (err=<nil>)
	I0317 13:21:58.893567  655237 status.go:176] multinode-913463 status: &{Name:multinode-913463 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:21:58.893591  655237 status.go:174] checking status of multinode-913463-m02 ...
	I0317 13:21:58.894000  655237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:21:58.894054  655237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:21:58.911419  655237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0317 13:21:58.912007  655237 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:21:58.912540  655237 main.go:141] libmachine: Using API Version  1
	I0317 13:21:58.912565  655237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:21:58.912918  655237 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:21:58.913149  655237 main.go:141] libmachine: (multinode-913463-m02) Calling .GetState
	I0317 13:21:58.914576  655237 status.go:371] multinode-913463-m02 host status = "Running" (err=<nil>)
	I0317 13:21:58.914594  655237 host.go:66] Checking if "multinode-913463-m02" exists ...
	I0317 13:21:58.914886  655237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:21:58.914944  655237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:21:58.930986  655237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I0317 13:21:58.931514  655237 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:21:58.931962  655237 main.go:141] libmachine: Using API Version  1
	I0317 13:21:58.931985  655237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:21:58.932349  655237 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:21:58.932527  655237 main.go:141] libmachine: (multinode-913463-m02) Calling .GetIP
	I0317 13:21:58.935348  655237 main.go:141] libmachine: (multinode-913463-m02) DBG | domain multinode-913463-m02 has defined MAC address 52:54:00:78:d1:0d in network mk-multinode-913463
	I0317 13:21:58.935831  655237 main.go:141] libmachine: (multinode-913463-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d1:0d", ip: ""} in network mk-multinode-913463: {Iface:virbr1 ExpiryTime:2025-03-17 14:20:19 +0000 UTC Type:0 Mac:52:54:00:78:d1:0d Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-913463-m02 Clientid:01:52:54:00:78:d1:0d}
	I0317 13:21:58.935866  655237 main.go:141] libmachine: (multinode-913463-m02) DBG | domain multinode-913463-m02 has defined IP address 192.168.39.28 and MAC address 52:54:00:78:d1:0d in network mk-multinode-913463
	I0317 13:21:58.935982  655237 host.go:66] Checking if "multinode-913463-m02" exists ...
	I0317 13:21:58.936302  655237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:21:58.936352  655237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:21:58.952244  655237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0317 13:21:58.952778  655237 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:21:58.953251  655237 main.go:141] libmachine: Using API Version  1
	I0317 13:21:58.953271  655237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:21:58.953582  655237 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:21:58.953773  655237 main.go:141] libmachine: (multinode-913463-m02) Calling .DriverName
	I0317 13:21:58.953965  655237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:21:58.953988  655237 main.go:141] libmachine: (multinode-913463-m02) Calling .GetSSHHostname
	I0317 13:21:58.956775  655237 main.go:141] libmachine: (multinode-913463-m02) DBG | domain multinode-913463-m02 has defined MAC address 52:54:00:78:d1:0d in network mk-multinode-913463
	I0317 13:21:58.957213  655237 main.go:141] libmachine: (multinode-913463-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d1:0d", ip: ""} in network mk-multinode-913463: {Iface:virbr1 ExpiryTime:2025-03-17 14:20:19 +0000 UTC Type:0 Mac:52:54:00:78:d1:0d Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-913463-m02 Clientid:01:52:54:00:78:d1:0d}
	I0317 13:21:58.957253  655237 main.go:141] libmachine: (multinode-913463-m02) DBG | domain multinode-913463-m02 has defined IP address 192.168.39.28 and MAC address 52:54:00:78:d1:0d in network mk-multinode-913463
	I0317 13:21:58.957485  655237 main.go:141] libmachine: (multinode-913463-m02) Calling .GetSSHPort
	I0317 13:21:58.957653  655237 main.go:141] libmachine: (multinode-913463-m02) Calling .GetSSHKeyPath
	I0317 13:21:58.957793  655237 main.go:141] libmachine: (multinode-913463-m02) Calling .GetSSHUsername
	I0317 13:21:58.957935  655237 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/multinode-913463-m02/id_rsa Username:docker}
	I0317 13:21:59.038751  655237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:21:59.052971  655237 status.go:176] multinode-913463-m02 status: &{Name:multinode-913463-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:21:59.053018  655237 status.go:174] checking status of multinode-913463-m03 ...
	I0317 13:21:59.053364  655237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:21:59.053420  655237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:21:59.070148  655237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35485
	I0317 13:21:59.070711  655237 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:21:59.071167  655237 main.go:141] libmachine: Using API Version  1
	I0317 13:21:59.071193  655237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:21:59.071659  655237 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:21:59.071928  655237 main.go:141] libmachine: (multinode-913463-m03) Calling .GetState
	I0317 13:21:59.074020  655237 status.go:371] multinode-913463-m03 host status = "Stopped" (err=<nil>)
	I0317 13:21:59.074036  655237 status.go:384] host is not running, skipping remaining checks
	I0317 13:21:59.074042  655237 status.go:176] multinode-913463-m03 status: &{Name:multinode-913463-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
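As the run above shows, `minikube status` exits with code 7 rather than 0 once any host is stopped, so automation around it has to treat a non-zero exit as "inspect the output" rather than as an outright failure. A hedged Go sketch of that handling (profile name and binary path taken from this log):

// Illustrative only: distinguish "status reported a stopped node" from "command failed".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-913463", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr):
		// Exit code 7 is what the StopNode test above observed for a stopped host.
		fmt.Printf("status exited with code %d (some component not running)\n", exitErr.ExitCode())
	default:
		fmt.Printf("could not run minikube: %v\n", err)
	}
}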

                                                
                                    
TestMultiNode/serial/StartAfterStop (84.42s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 node start m03 -v=7 --alsologtostderr
E0317 13:22:12.573223  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-913463 node start m03 -v=7 --alsologtostderr: (1m23.80610877s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (84.42s)

TestMultiNode/serial/RestartKeepsNodes (340.84s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-913463
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-913463
E0317 13:23:44.452826  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-913463: (3m3.013106263s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-913463 --wait=true -v=8 --alsologtostderr
E0317 13:27:12.573040  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:28:44.452874  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-913463 --wait=true -v=8 --alsologtostderr: (2m37.731187673s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-913463
--- PASS: TestMultiNode/serial/RestartKeepsNodes (340.84s)

TestMultiNode/serial/DeleteNode (2.62s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-913463 node delete m03: (2.086821095s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.62s)

TestMultiNode/serial/StopMultiNode (182.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 stop
E0317 13:30:15.639946  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-913463 stop: (3m1.861155087s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-913463 status: exit status 7 (93.113902ms)

                                                
                                                
-- stdout --
	multinode-913463
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-913463-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-913463 status --alsologtostderr: exit status 7 (90.925856ms)

                                                
                                                
-- stdout --
	multinode-913463
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-913463-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:32:08.957344  658426 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:32:08.957604  658426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:32:08.957613  658426 out.go:358] Setting ErrFile to fd 2...
	I0317 13:32:08.957617  658426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:32:08.957829  658426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:32:08.957979  658426 out.go:352] Setting JSON to false
	I0317 13:32:08.958009  658426 mustload.go:65] Loading cluster: multinode-913463
	I0317 13:32:08.958132  658426 notify.go:220] Checking for updates...
	I0317 13:32:08.958394  658426 config.go:182] Loaded profile config "multinode-913463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:32:08.958415  658426 status.go:174] checking status of multinode-913463 ...
	I0317 13:32:08.958832  658426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:32:08.958879  658426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:32:08.978143  658426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0317 13:32:08.978593  658426 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:32:08.979053  658426 main.go:141] libmachine: Using API Version  1
	I0317 13:32:08.979074  658426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:32:08.979516  658426 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:32:08.979742  658426 main.go:141] libmachine: (multinode-913463) Calling .GetState
	I0317 13:32:08.981430  658426 status.go:371] multinode-913463 host status = "Stopped" (err=<nil>)
	I0317 13:32:08.981451  658426 status.go:384] host is not running, skipping remaining checks
	I0317 13:32:08.981460  658426 status.go:176] multinode-913463 status: &{Name:multinode-913463 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:32:08.981500  658426 status.go:174] checking status of multinode-913463-m02 ...
	I0317 13:32:08.981810  658426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0317 13:32:08.981851  658426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0317 13:32:08.997121  658426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44031
	I0317 13:32:08.997535  658426 main.go:141] libmachine: () Calling .GetVersion
	I0317 13:32:08.998011  658426 main.go:141] libmachine: Using API Version  1
	I0317 13:32:08.998035  658426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0317 13:32:08.998396  658426 main.go:141] libmachine: () Calling .GetMachineName
	I0317 13:32:08.998570  658426 main.go:141] libmachine: (multinode-913463-m02) Calling .GetState
	I0317 13:32:09.000278  658426 status.go:371] multinode-913463-m02 host status = "Stopped" (err=<nil>)
	I0317 13:32:09.000290  658426 status.go:384] host is not running, skipping remaining checks
	I0317 13:32:09.000296  658426 status.go:176] multinode-913463-m02 status: &{Name:multinode-913463-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.05s)

TestMultiNode/serial/RestartMultiNode (112.95s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-913463 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0317 13:32:12.573300  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:33:44.452155  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-913463 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.423159136s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-913463 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (112.95s)

TestMultiNode/serial/ValidateNameConflict (39.78s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-913463
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-913463-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-913463-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (65.639823ms)

                                                
                                                
-- stdout --
	* [multinode-913463-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-913463-m02' is duplicated with machine name 'multinode-913463-m02' in profile 'multinode-913463'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-913463-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-913463-m03 --driver=kvm2  --container-runtime=crio: (38.64406085s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-913463
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-913463: exit status 80 (213.642307ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-913463 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-913463-m03 already exists in multinode-913463-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-913463-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.78s)

TestScheduledStopUnix (114.01s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-306043 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-306043 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.355049892s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-306043 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-306043 -n scheduled-stop-306043
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-306043 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0317 13:38:21.299281  629188 retry.go:31] will retry after 142.117µs: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.300471  629188 retry.go:31] will retry after 99.181µs: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.301597  629188 retry.go:31] will retry after 329.8µs: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.302730  629188 retry.go:31] will retry after 296.191µs: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.303847  629188 retry.go:31] will retry after 674.956µs: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.304998  629188 retry.go:31] will retry after 587.075µs: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.306145  629188 retry.go:31] will retry after 1.045657ms: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.307294  629188 retry.go:31] will retry after 1.081714ms: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.308437  629188 retry.go:31] will retry after 1.601219ms: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.310684  629188 retry.go:31] will retry after 5.552319ms: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.316906  629188 retry.go:31] will retry after 3.557289ms: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.321109  629188 retry.go:31] will retry after 9.936166ms: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.331360  629188 retry.go:31] will retry after 15.856885ms: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.347607  629188 retry.go:31] will retry after 16.02053ms: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.363893  629188 retry.go:31] will retry after 17.25836ms: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
I0317 13:38:21.382233  629188 retry.go:31] will retry after 48.520055ms: open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-306043 --cancel-scheduled
E0317 13:38:27.527959  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:38:44.453745  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-306043 -n scheduled-stop-306043
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-306043
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-306043 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-306043
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-306043: exit status 7 (65.88951ms)

                                                
                                                
-- stdout --
	scheduled-stop-306043
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-306043 -n scheduled-stop-306043
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-306043 -n scheduled-stop-306043: exit status 7 (65.804454ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-306043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-306043
--- PASS: TestScheduledStopUnix (114.01s)
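
Note: the retry.go lines above show the test polling for the scheduled-stop pid file with increasing back-off delays. Below is a minimal Go sketch of that wait-for-file pattern; the doubling back-off and the helper name are assumptions, not minikube's actual retry implementation.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path until it exists or the timeout elapses,
// roughly doubling the delay between attempts, similar to the growing
// retry intervals visible in the retry.go log lines above.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 5 * time.Millisecond // assumed starting delay
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	pid := "/home/jenkins/minikube-integration/20539-621978/.minikube/profiles/scheduled-stop-306043/pid"
	fmt.Println(waitForFile(pid, 2*time.Second))
}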

                                                
                                    
TestRunningBinaryUpgrade (191.79s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2205872745 start -p running-upgrade-272813 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2205872745 start -p running-upgrade-272813 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.25287124s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-272813 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-272813 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.518787816s)
helpers_test.go:175: Cleaning up "running-upgrade-272813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-272813
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-272813: (1.216428755s)
--- PASS: TestRunningBinaryUpgrade (191.79s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-234912 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-234912 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (89.731017ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-234912] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
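
Note: this test only checks that combining --no-kubernetes with --kubernetes-version is rejected with exit status 14 (MK_USAGE). A small Go sketch of asserting that exit code with os/exec, reusing the command line shown above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "NoKubernetes-234912",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("got the expected MK_USAGE exit code 14")
		return
	}
	fmt.Println("unexpected result:", err)
}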

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (97.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-234912 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-234912 --driver=kvm2  --container-runtime=crio: (1m37.104911274s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-234912 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.35s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.67s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (145.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3313414647 start -p stopped-upgrade-811360 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3313414647 start -p stopped-upgrade-811360 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m33.395485057s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3313414647 -p stopped-upgrade-811360 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3313414647 -p stopped-upgrade-811360 stop: (1.410955326s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-811360 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0317 13:42:12.573656  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-811360 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.434041965s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (145.24s)
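
Note: the upgrade path exercised here is start with an old release binary, stop the cluster, then start again with the binary under test. Below is a Go sketch of that three-step flow with os/exec; the old-binary path is a placeholder standing in for the temporary download shown above.

package main

import (
	"log"
	"os/exec"
)

// run executes one command and aborts the flow on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	oldBinary := "/tmp/minikube-v1.26.0" // placeholder for the downloaded release binary
	profile := "stopped-upgrade-811360"

	run(oldBinary, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
	run(oldBinary, "-p", profile, "stop")
	run("out/minikube-linux-amd64", "start", "-p", profile, "--memory=2200",
		"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
}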

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (38.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-234912 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-234912 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.797347937s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-234912 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-234912 status -o json: exit status 2 (265.504556ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-234912","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-234912
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-234912: (1.401648146s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.46s)
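
Note: the status check above relies on `status -o json`, which reports the VM as Running while the kubelet and apiserver stay Stopped (hence the exit status 2). A Go sketch of decoding that output; the struct fields mirror the JSON printed above.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus mirrors the fields in the JSON printed above.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// status exits non-zero when Kubernetes is stopped, so keep the captured
	// stdout even though err is not nil.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-234912",
		"status", "-o", "json").Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}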

                                                
                                    
TestNoKubernetes/serial/Start (29.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-234912 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-234912 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.039503417s)
--- PASS: TestNoKubernetes/serial/Start (29.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-234912 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-234912 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.974564ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
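
Note: this verification hinges on systemctl exit codes: `systemctl is-active` returns 0 when the unit is active and non-zero (3 above) when it is not, so the ssh command failing is the expected outcome. A Go sketch of that interpretation; the helper name is invented.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet unit is active inside the guest,
// judging only by the exit code of systemctl is-active run over minikube ssh.
func kubeletActive(profile string) (bool, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil // exit 0: kubelet is active
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit (3 above): kubelet is not running
	}
	return false, err // the ssh invocation itself failed
}

func main() {
	active, err := kubeletActive("NoKubernetes-234912")
	fmt.Println("kubelet active:", active, "err:", err)
}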

                                                
                                    
TestNoKubernetes/serial/ProfileList (16.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.370556529s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.29s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-234912
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-234912: (1.29242587s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (20.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-234912 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-234912 --driver=kvm2  --container-runtime=crio: (20.671214575s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.67s)

                                                
                                    
TestPause/serial/Start (60.37s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-880805 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-880805 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m0.37095116s)
--- PASS: TestPause/serial/Start (60.37s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-234912 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-234912 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.597226ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-811360
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

                                                
                                    
TestNetworkPlugins/group/false (3.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-788750 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-788750 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (104.780747ms)

                                                
                                                
-- stdout --
	* [false-788750] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:43:03.064874  665948 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:43:03.065300  665948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:43:03.065358  665948 out.go:358] Setting ErrFile to fd 2...
	I0317 13:43:03.065376  665948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:43:03.065850  665948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
	I0317 13:43:03.066870  665948 out.go:352] Setting JSON to false
	I0317 13:43:03.067994  665948 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12327,"bootTime":1742206656,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:43:03.068094  665948 start.go:139] virtualization: kvm guest
	I0317 13:43:03.069948  665948 out.go:177] * [false-788750] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:43:03.071582  665948 notify.go:220] Checking for updates...
	I0317 13:43:03.071613  665948 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:43:03.073076  665948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:43:03.074551  665948 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
	I0317 13:43:03.075933  665948 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
	I0317 13:43:03.077127  665948 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:43:03.078506  665948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:43:03.080020  665948 config.go:182] Loaded profile config "force-systemd-env-662195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:43:03.080146  665948 config.go:182] Loaded profile config "kubernetes-upgrade-312638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0317 13:43:03.080246  665948 config.go:182] Loaded profile config "pause-880805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0317 13:43:03.080364  665948 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:43:03.116843  665948 out.go:177] * Using the kvm2 driver based on user configuration
	I0317 13:43:03.118072  665948 start.go:297] selected driver: kvm2
	I0317 13:43:03.118090  665948 start.go:901] validating driver "kvm2" against <nil>
	I0317 13:43:03.118104  665948 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:43:03.119974  665948 out.go:201] 
	W0317 13:43:03.121039  665948 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0317 13:43:03.122188  665948 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-788750 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-788750" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-788750" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-788750

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-788750"

                                                
                                                
----------------------- debugLogs end: false-788750 [took: 2.87999191s] --------------------------------
helpers_test.go:175: Cleaning up "false-788750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-788750
--- PASS: TestNetworkPlugins/group/false (3.13s)
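
Note: the point of this group is that `--cni=false` is rejected up front because CRI-O has no built-in pod networking. The sketch below imitates that kind of flag validation; it is not minikube's actual code, and the function name is invented.

package main

import "fmt"

// validateCNI mirrors the usage check seen above: the crio runtime cannot
// start with CNI disabled, so the combination is refused before any VM work.
func validateCNI(containerRuntime, cni string) error {
	if containerRuntime == "crio" && cni == "false" {
		return fmt.Errorf("The %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}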

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (90.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-142429 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-142429 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m30.184030421s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (58.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-304104 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0317 13:46:55.641975  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:47:12.573238  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-304104 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (58.899227648s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (58.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-142429 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e1d79e2d-07be-4cf0-ba02-d4d3bdb00851] Pending
helpers_test.go:344: "busybox" [e1d79e2d-07be-4cf0-ba02-d4d3bdb00851] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e1d79e2d-07be-4cf0-ba02-d4d3bdb00851] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004066171s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-142429 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.90s)
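
Note: DeployApp waits up to 8 minutes for the busybox pod (label integration-test=busybox) to reach Running before exec'ing `ulimit -n` in it. A Go sketch of that wait using kubectl's jsonpath output instead of the test helpers; the 5-second poll interval is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls the phase of pods matching the label selector in the
// given kubectl context until one reports Running or the timeout expires.
func waitForRunning(kubeContext, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pods",
			"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("pods %q not Running within %s", selector, timeout)
}

func main() {
	fmt.Println(waitForRunning("no-preload-142429", "integration-test=busybox", 8*time.Minute))
}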

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-142429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-142429 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-142429 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-142429 --alsologtostderr -v=3: (1m31.131852713s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-304104 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5d4f0271-cca5-4e67-bfeb-7f497b08a343] Pending
helpers_test.go:344: "busybox" [5d4f0271-cca5-4e67-bfeb-7f497b08a343] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5d4f0271-cca5-4e67-bfeb-7f497b08a343] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.002882622s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-304104 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-304104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-304104 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-304104 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-304104 --alsologtostderr -v=3: (1m31.152016668s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-064245 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0317 13:48:44.452923  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-064245 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (56.634684907s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.63s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-142429 -n no-preload-142429
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-142429 -n no-preload-142429: exit status 7 (67.033846ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-142429 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
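
Note: after a stop, `minikube status` exits with code 7, which the test tolerates ("may be ok") as long as the printed host state is Stopped. A Go sketch of that lenient check; treating 7 as the only acceptable non-zero code is an assumption based on the output above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-142429", "-n", "no-preload-142429")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host state:", strings.TrimSpace(string(out)))
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit status 7 is expected for a stopped host; check the state text.
		fmt.Println("stopped host (exit 7), state:", strings.TrimSpace(string(out)))
	default:
		panic(err)
	}
}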

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (386.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-142429 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-142429 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (6m26.177373472s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-142429 -n no-preload-142429
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (386.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-064245 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [993165f9-476f-4a1b-9cf3-3d82e4d65355] Pending
helpers_test.go:344: "busybox" [993165f9-476f-4a1b-9cf3-3d82e4d65355] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [993165f9-476f-4a1b-9cf3-3d82e4d65355] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003652402s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-064245 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-064245 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-064245 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (90.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-064245 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-064245 --alsologtostderr -v=3: (1m30.805420257s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (90.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304104 -n embed-certs-304104
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304104 -n embed-certs-304104: exit status 7 (68.306587ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-304104 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (295.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-304104 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-304104 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (4m55.552739643s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304104 -n embed-certs-304104
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (295.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-064245 -n default-k8s-diff-port-064245
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-064245 -n default-k8s-diff-port-064245: exit status 7 (78.234472ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-064245 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-064245 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-064245 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (4m59.655439942s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-064245 -n default-k8s-diff-port-064245
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (4.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-803027 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-803027 --alsologtostderr -v=3: (4.386959813s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-803027 -n old-k8s-version-803027: exit status 7 (72.089568ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-803027 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5w7pd" [73767e05-1bf7-4868-add0-0d4e20d130d1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004468613s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5w7pd" [73767e05-1bf7-4868-add0-0d4e20d130d1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003723279s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-304104 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-304104 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-304104 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304104 -n embed-certs-304104
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304104 -n embed-certs-304104: exit status 2 (285.232811ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-304104 -n embed-certs-304104
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-304104 -n embed-certs-304104: exit status 2 (281.491714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-304104 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304104 -n embed-certs-304104
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-304104 -n embed-certs-304104
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.62s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.21s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-447293 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0317 13:55:07.529431  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-447293 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (48.212984603s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-447293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-447293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.068097768s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-kgcsm" [53b1f31c-3a46-4310-8e04-651505c716f7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-kgcsm" [53b1f31c-3a46-4310-8e04-651505c716f7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005657656s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-447293 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-447293 --alsologtostderr -v=3: (10.526958801s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-447293 -n newest-cni-447293
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-447293 -n newest-cni-447293: exit status 7 (79.612546ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-447293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-447293 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-447293 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (37.538241384s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-447293 -n newest-cni-447293
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-kgcsm" [53b1f31c-3a46-4310-8e04-651505c716f7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004365085s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-142429 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-142429 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.76s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-142429 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-142429 -n no-preload-142429
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-142429 -n no-preload-142429: exit status 2 (250.432224ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-142429 -n no-preload-142429
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-142429 -n no-preload-142429: exit status 2 (243.746747ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-142429 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-142429 -n no-preload-142429
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-142429 -n no-preload-142429
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.76s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-bkfp6" [cb682152-77dd-4f7b-82e0-d99a4dcef7ea] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003548581s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (72.53s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m12.534153982s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-bkfp6" [cb682152-77dd-4f7b-82e0-d99a4dcef7ea] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003906516s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-064245 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-064245 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-064245 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-064245 -n default-k8s-diff-port-064245
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-064245 -n default-k8s-diff-port-064245: exit status 2 (237.088282ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-064245 -n default-k8s-diff-port-064245
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-064245 -n default-k8s-diff-port-064245: exit status 2 (233.190665ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-064245 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-064245 -n default-k8s-diff-port-064245
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-064245 -n default-k8s-diff-port-064245
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.54s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (96.03s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m36.025701958s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (96.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-447293 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.51s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-447293 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-447293 -n newest-cni-447293
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-447293 -n newest-cni-447293: exit status 2 (242.905029ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-447293 -n newest-cni-447293
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-447293 -n newest-cni-447293: exit status 2 (231.908681ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-447293 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-447293 -n newest-cni-447293
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-447293 -n newest-cni-447293
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.51s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (113.15s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m53.146937337s)
--- PASS: TestNetworkPlugins/group/calico/Start (113.15s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-788750 "pgrep -a kubelet"
I0317 13:57:03.518280  629188 config.go:182] Loaded profile config "auto-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (14.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-788750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hnxwq" [850d45f1-e9ea-4319-80eb-478b5c575d2f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hnxwq" [850d45f1-e9ea-4319-80eb-478b5c575d2f] Running
E0317 13:57:12.572952  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/functional-141794/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.003749514s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-788750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (72.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m12.159550447s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nb9sw" [5d30c4ef-9e82-49f2-b21e-537e2c96e622] Running
E0317 13:57:40.021997  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/no-preload-142429/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004117764s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-788750 "pgrep -a kubelet"
I0317 13:57:44.409309  629188 config.go:182] Loaded profile config "kindnet-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-788750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jd8gd" [0df83eac-e7c3-4b8a-94f9-f34c8f57a80b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jd8gd" [0df83eac-e7c3-4b8a-94f9-f34c8f57a80b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003651291s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-788750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (54.03s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (54.025190319s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (54.03s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7sw85" [bee908c8-768d-4957-8c27-b475ad17c174] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004852632s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-788750 "pgrep -a kubelet"
I0317 13:58:21.747891  629188 config.go:182] Loaded profile config "calico-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-788750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-w8qrw" [48fe5ea1-f3d6-4bce-8f7f-145f6ed30138] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-w8qrw" [48fe5ea1-f3d6-4bce-8f7f-145f6ed30138] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004266761s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-788750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-788750 "pgrep -a kubelet"
I0317 13:58:47.409620  629188 config.go:182] Loaded profile config "custom-flannel-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-788750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-rhd5w" [65388ee2-b011-4d01-9298-647397553736] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-rhd5w" [65388ee2-b011-4d01-9298-647397553736] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003448179s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (74.6s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m14.600941897s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.60s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-788750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-788750 "pgrep -a kubelet"
E0317 13:59:06.683176  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/default-k8s-diff-port-064245/client.crt: no such file or directory" logger="UnhandledError"
I0317 13:59:06.696128  629188 config.go:182] Loaded profile config "enable-default-cni-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-788750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qks4l" [45cf4f33-ca5d-4712-ac35-7368953d4c7d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0317 13:59:09.244670  629188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/default-k8s-diff-port-064245/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-qks4l" [45cf4f33-ca5d-4712-ac35-7368953d4c7d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003847604s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (62.26s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-788750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m2.26292882s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-788750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-28ssr" [9f658e75-17e6-4f0c-9b90-b8ecd8266540] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0035633s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-788750 "pgrep -a kubelet"
I0317 14:00:10.795435  629188 config.go:182] Loaded profile config "flannel-788750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-788750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-d6hrs" [791b8d59-1cf3-41e1-8035-343b322872af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-d6hrs" [791b8d59-1cf3-41e1-8035-343b322872af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004463659s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-788750 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-788750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4vm2d" [abc9d3a4-9206-4813-98d4-d89ae1732868] Pending
helpers_test.go:344: "netcat-5d86dc444-4vm2d" [abc9d3a4-9206-4813-98d4-d89ae1732868] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4vm2d" [abc9d3a4-9206-4813-98d4-d89ae1732868] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004570975s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-788750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-788750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-788750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

Test skip (35/322)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.28s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012915 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-957562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-957562
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (2.93s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-788750 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-788750" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-788750" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-788750

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-788750"

                                                
                                                
----------------------- debugLogs end: kubenet-788750 [took: 2.771075615s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-788750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-788750
--- SKIP: TestNetworkPlugins/group/kubenet (2.93s)

TestNetworkPlugins/group/cilium (3.27s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-788750 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-788750" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-788750

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-788750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-788750"

                                                
                                                
----------------------- debugLogs end: cilium-788750 [took: 3.132192816s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-788750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-788750
--- SKIP: TestNetworkPlugins/group/cilium (3.27s)